
Nvidia Claims Intel's Larrabee Is "a GPU From 2006"

Posted by Soulskill
from the dem's-fightin'-woids dept.
Barence sends this excerpt from PC Pro: "Nvidia has delivered a scathing criticism of Intel's Larrabee, dismissing the multi-core CPU/GPU as wishful thinking — while admitting it needs to catch up with AMD's current Radeon graphics cards. 'Intel is not a stupid company,' conceded John Montrym, chief architect for the company's GT200 core. 'They've put a lot of people behind this, so clearly they believe it's viable. But the products on our roadmap are competitive to this thing as they've painted it. And the reality is going to fall short of the optimistic way they've painted it. As [blogger and CPU architect] Peter Glaskowsky said, the "large" Larrabee in 2010 will have roughly the same performance as a 2006 GPU from Nvidia or ATI.' Speaking ahead of the opening of the annual NVISION expo on Monday, he also admitted Nvidia 'underestimated ATI with respect to their product.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Doh of the Day (Score:5, Insightful)

    by eddy (18759) on Sunday August 24, 2008 @08:55AM (#24725623) Homepage Journal

    ...he also admitted Nvidia 'underestimated ATI with respect to their product.'

    Good, learn from that and don't make that same mistake again!

    Larrabee [...] will have roughly the same performance as a 2006 GPU from Nvidia or ATI.'

    DOH!

    • Re:Doh of the Day (Score:5, Interesting)

      by geoskd (321194) on Sunday August 24, 2008 @12:49PM (#24726987)
      Intel has made some bad missteps in the past, and one of them was failing to design their processors around the strengths and weaknesses of their memory architecture. Rambus is a prime example. It was a superior solution for the wrong problem, and Intel failed to design their processors to take advantage of the memory's strengths, and it looks like they are doing it again. The limiting factor in CPU/GPU performance isn't how many instructions you can pound into any given second, it's how much total memory you can get at in that time frame. It does you no good to be able to process 16 billion pixels / second, when you can only get the data for 4 billion per second from your memories. Better to build a system that can get 6 billion per second from the memory, and can process only 6 billion per second. That is the fundamental problem that Nvidia seems to understand, and Intel doesn't.

      -=Geoskd
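Geoskd's balance argument above can be sketched as a back-of-the-envelope calculation. This is illustrative arithmetic using the hypothetical throughput figures from the comment, not real hardware numbers:

```python
# Effective throughput is capped by the slower of compute and memory,
# so a balanced design beats a compute-heavy, bandwidth-starved one.

def effective_pixels_per_sec(compute_rate, memory_rate):
    """Pixels actually delivered per second: the minimum of what the
    cores can process and what the memory subsystem can feed them."""
    return min(compute_rate, memory_rate)

# Hypothetical figures from the comment (pixels per second):
unbalanced = effective_pixels_per_sec(16e9, 4e9)  # 16 G compute, 4 G memory
balanced   = effective_pixels_per_sec(6e9, 6e9)   # 6 G compute, 6 G memory

print(unbalanced)  # the extra compute is wasted: memory caps it at 4e9
print(balanced)    # slower cores, yet 50% more pixels actually delivered
```

The same min() shape is what roofline-style performance models formalize: past the point where bandwidth is the binding constraint, adding compute buys nothing.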
      • Re: (Score:3, Funny)

        by cerberusss (660701)

        It does you no good to be able to process 16 billion pixels / second, when you can only get the data for 4 billion per second from your memories.

        [voice accent="scotty"]
        It's the memories, capt'n! They canna' take it any longer!
        [/voice]

    • by Ideaphile (678292) on Sunday August 24, 2008 @03:39PM (#24728679)
      Although I appreciate the attention from NVIDIA and Slashdot, I can't support that alleged quote from my blog (http://speedsnfeeds.com).

      First, what's being described as a quote is actually just John Montrym's summary from my original post, which is here:

      http://news.cnet.com/8301-13512_3-10006184-23.html [cnet.com]

      What I actually described as equating to "the performance of a 2006-vintage... graphics chip" was a performance standard defined by Intel itself: running the game F.E.A.R. at 60 fps in 1,600 x 1,200-pixel resolution with four-sample antialiasing.

      Intel used this figure for some comparisons of rendering performance. If Larrabee ran at 1 GHz, for example, Intel's figures show that it would take somewhere from 7 to 25 Larrabee cores to reach that 60 Hz frame rate.
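For context, the raw sample throughput implied by that Intel benchmark target can be computed directly. This is plain arithmetic on the figures quoted above (resolution, antialiasing samples, frame rate), not a claim about Larrabee's internals:

```python
# Raw sample throughput implied by Intel's stated target:
# F.E.A.R. at 1,600 x 1,200, 4-sample antialiasing, 60 frames per second.
width, height = 1600, 1200
aa_samples = 4
fps = 60

pixels_per_frame = width * height                   # 1,920,000
samples_per_frame = pixels_per_frame * aa_samples   # 7,680,000
samples_per_second = samples_per_frame * fps        # 460,800,000

print(f"{samples_per_second / 1e6:.1f} M samples/s")  # 460.8 M samples/s
```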

      Larrabee will probably run much faster than that, at least on desktop variants.

      Well... rather than writing the whole response here, I think I'd rather write it up for my blog and publish it there. Please surf on over and check it out:

      http://news.cnet.com/8301-13512_3-10024280-23.html [cnet.com]

      Comments are welcome here or there.

      . png
  • by Anonymous Coward on Sunday August 24, 2008 @08:55AM (#24725627)
    No wonder it's so slow. He keeps making reference to how it paints things. Can't move on to another frame until the previous one has dried.
  • by Joce640k (829181) on Sunday August 24, 2008 @08:57AM (#24725633) Homepage

    So why is NVIDIA on the defensive?

    Intel is aiming at number crunchers (note that their chip uses doubles, not floats). They don't want NVIDIA to steal that market with CUDA.

    When Intel says "graphics", they mean movie studios, etc.

    If Larrabee eventually turns into a competitor for NVIDIA, all well and good, but that's not their goal at the moment.

    • by Shinatosh (1143969) on Sunday August 24, 2008 @09:11AM (#24725693)
      OK, Nvidia and AMD/ATI will probably outperform the Intel GPU. However, Intel has open specs for their GPUs, so for non-gamers there will be a GPU with quite good performance to be used under Linux for various purposes, with high-quality OSS drivers. I'm looking forward to it.
      Shinatosh
    • by Nymz (905908) on Sunday August 24, 2008 @09:16AM (#24725715) Journal

      So why is NVIDIA on the defensive?

      I think the Nvidia people think pretty highly of themselves, and rightfully so, and Intel has recently been making a number of bold claims without backing them up. In a poker analogy, Nvidia is calling Intel's bluff.

      • by sortius_nod (1080919) on Sunday August 24, 2008 @11:26AM (#24726393) Homepage

        I wouldn't call it that.

        I'd call it a knee-jerk reaction to a non-issue.

        Nvidia are getting very scared now that ATi are beating them senseless. I run both ATi and Nvidia, so don't go down the "you're just a fanboy" angle either.

        I've seen chip makers come and go; this is just another attempt by Nvidia to try and shore up support for their product, but this time they can't turn to ATi and say "look how crap their chips are" - they have to do it to Intel, who are aiming the chips at corporate markets.

        To be honest, the best bang for buck at the lower end of the market for 2D seems to be the Intel chips. One thing that does tend to surprise people is the complete lack of performance that the Nvidia chipsets have when not in 3D. ATi don't seem to have these problems, having built around a solid base of 2D graphics engines in the 90's (Rage/RageII is at least one reason why people went with Macs back then). Nvidia is really feeling the pinch with ATi taking up the higher end of the market (pro-gear/high end HD) and Intel shoring up the lower end (GMA, etc). Nvidia is pretty much stuck with consumers buying their middle-of-the-line gear (8600/9600).

        When you aim high, it tends to hurt when you fall from grace; the whole 8800 to 9800 leap was abysmal at best, unlike their main competitor, who really pulled their finger out to release the 3xxx & 4xxx series.

        All in all this seems like a bit of pork barrelling on Nvidia's part to detract from the complete lack of performance in their $1000 video card range. If anything this type of bullshit will be rewarded with a massive consumer (yes, geek and gamer) backlash.

        I know my products, I know their limitations - I don't need some exec talking crap to tell me, and base level consumers will never read it.

        • by Sj0 (472011) on Sunday August 24, 2008 @01:36PM (#24727393) Homepage Journal

          Um...

          Correct me if I'm wrong, but are you going around talking about the performance of 2d parts?

          In 2008?

          Seriously?

          Are you sure you want to do that?

          I mean, you'll look stupid.

          Seriously. A straight framebuffer device can pretty much update at full speed while using only a fraction of the PCI-E bus and a fraction of the processing power of a modern CPU.
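The framebuffer claim is easy to sanity-check with arithmetic. The 1,920 x 1,200 desktop resolution and the ~4 GB/s figure for a first-generation PCI-E x16 link are my own illustrative assumptions, not figures from the thread:

```python
# Bandwidth needed to refresh a plain 32-bit framebuffer at 60 Hz,
# compared with the rough capacity of a PCIe 1.x x16 link.
width, height = 1920, 1200   # assumed desktop resolution
bytes_per_pixel = 4          # 32-bit color
refresh_hz = 60

fb_bytes_per_sec = width * height * bytes_per_pixel * refresh_hz
pcie_x16_bytes_per_sec = 4e9  # approximate PCIe 1.x x16 throughput

fraction = fb_bytes_per_sec / pcie_x16_bytes_per_sec
print(f"{fb_bytes_per_sec / 1e6:.0f} MB/s, {fraction:.1%} of the link")
```

Even at a fairly high 2D resolution, a full-rate framebuffer refresh consumes well under a fifth of the assumed link bandwidth.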

          • by ThisNukes4u (752508) * <.moc.liamg. .ta. .ippoct.> on Sunday August 24, 2008 @04:54PM (#24729423) Homepage
            2D performance is more than just how fast you can refresh a framebuffer from memory. Check out x11perf -aa10, which tests drawing 10pt anti-aliased fonts. My Radeon 9250 with open source drivers gets about a 2x better score than my brand new 4850 with fglrx. The difference is that ATI/AMD (and Nvidia as well) don't spend nearly as much time optimizing these parts of the driver (considered "2D" but they really use the 3D engine), while you need hardware acceleration and driver support to do it at a good speed (which the open source r200 driver does, even faster than pure software on my not-too-sluggish Phenom 9950).
        • by mabhatter654 (561290) on Sunday August 24, 2008 @08:59PM (#24731455)

          Unlike Nvidia and AMD, Intel can bundle whatever it wants and cover the costs in CPU and chipset manufacturing. If Intel wants, they can give away whatever specs are needed in order to corner the market, a la Microsoft, and bend the software industry to its will. In some ways even AMD can bundle GPUs and CPUs in value packs, but Nvidia is out cold for now, and that scares the piss out of them.

        • Re: (Score:3, Informative)

          by MojoStan (776183)

          Nvidia are getting very scared now that ATi are beating them senseless.

          Nvidia is really feeling the pinch with ATi taking up the higher end of the market (pro-gear/high end HD) and intel suring up the lower end (GMA, etc). Nvidia pretty much are stuck with consumers buying their middle of the line gear (8600/9600).

          When you aim high you tend to hurt real when you fall from grace, the whole 8800 to 9800 leap was abysmal at best unlike their main competitor who really pulled their finger out to release the 3xxx & 4xxx series.

          I guess you're referring to AMD/ATI's successful HD 4000 launch about a month ago, but you also seem to be omitting NVIDIA's GTX 200 series. At the high end (non-workstation), the GTX 280 outperforms the HD 4870 [arstechnica.com]. The $550 HD 4870 X2 (released about two weeks ago) outperforms the $450 GTX 280, but consumes a heck of a lot more power [anandtech.com].

          Also, NVIDIA seems to have been beating AMD/ATI senseless for years. According to Jon Peddie Research [jonpeddie.com], for total graphics chips, NVIDIA had 31.4% market share in Q2 2008 vs. AM

    • Re: (Score:2, Insightful)

      by Ant P. (974313)

      So why is NVIDIA on the defensive?

      They're substituting rhetoric for an actual competitive product. Right now they're crapping their pants because they gambled everything on Vista which is failing spectacularly, whereas both ATi and Intel have got a 2 year head start on supporting Ubuntu out of the box. You can say Ubuntu isn't linux, but it's what all those Dell buyers are going to see.

      • Re: (Score:2, Informative)

        by sammyF70 (1154563)
        You should get your facts straight: ATI's Linux drivers are atrocious (might have changed in the last 6 months or so, but I wouldn't bet on it). Between the two, only nVidia has halfway good drivers for their products.
        • ATI recently released specs for their R600 chips.

          Their driver might suck big time - as does its open-source counterpart - yet in the long term, ATI has a huge advantage right now. In my eyes, sincere Linux support is a huge advantage - though I game exclusively on Windows.

          Needless to say, the dialog ATI has established with its Linux users and the OSS developer community will also contribute positively to their proprietary drivers.

          Both Intel and nVidia - proprietary driver companies - should be on defensive ri

          • Re: (Score:3, Interesting)

            by sammyF70 (1154563)
            Yes. I know about ATI releasing the specs, which is why I said it might have gotten better now, though I guess it's going to be some time before we see anything happen (but it probably will)
            • Re: (Score:3, Insightful)

              by Curtman (556920)

              I know about ATI releasing the specs, which is why I said it might have gotten better now, though I guess it's going to be some time before we see anything happen (but it probably will)

              I hope so, but Intel has released comprehensive driver docs [intellinuxgraphics.org] for a long time, and their driver still sucks.

          • by CajunArson (465943) on Sunday August 24, 2008 @11:54PM (#24732603) Journal

            Both Intel and nVidia - proprietary driver companies - should be on defensive right now.
            You obviously don't know much about Intel's commitment to open source in its drivers. Rather than just recently dumping some of the specs for its chips on the community at large, Intel has actively paid developers to maintain top-quality 100% FOSS drivers for Linux and X11 for years, making its commitment light-years ahead of ATI or Nvidia. Ever hear of Keith Packard, you know, the leading developer of the entire X system? He's an Intel employee. For all the accolades that ATI gets for dumping a bunch of specs on the web, Intel has put vastly more time & money into supporting OSS, but still gets labeled as "closed source" by fanboys.

      • by TheRaven64 (641858) on Sunday August 24, 2008 @10:11AM (#24725979) Journal
        Uh, nVidia has supported Linux, FreeBSD and Solaris for some years now. The drivers are binary-only, which makes them unacceptable to some people (I'd prefer not to run them, personally), but if you're buying a Dell with Ubuntu or a Sun with Solaris on it you can easily use an nVidia GPU and get the same sort of performance you would from Windows.
      • both ATi and Intel have got a 2 year head start on supporting Ubuntu out of the box. You can say Ubuntu isn't linux, but it's what all those Dell buyers are going to see.

        I was not aware that there are specific "Ubuntu" drivers. Can you confirm that is what you mean - otherwise that last sentence does not make sense to me.

          The nVidia drivers are binary-only, so they are not available in the standard source repositories and are not compiled and included by default in most open-source distributions.

          Ubuntu has made the necessary arrangements and provides, out of the box, a tool that can automatically download and install binary drivers from within the usual setup tool.

          I think that's what the parent poster may be referring to.

          That means that, instead of having to manually download a package and execute it (from the command line) - which isn

      • by Kjella (173770) on Sunday August 24, 2008 @11:16AM (#24726321) Homepage

        Right now they're crapping their pants because they gambled everything on Vista which is failing spectacularly, whereas both ATi and Intel have got a 2 year head start on supporting Ubuntu out of the box.

        Traditionally (before the AMD buyout) ATI has had terrible support under Linux; nVidia has been delivering their binary drivers, and in my experience they have been easy to install, stable and fully functional for much longer than ATI's. By the way, the latest 177.68 drivers should now have fixed the recent KDE4 performance issues. From what I've understood, the fglrx (closed source) ATI driver has made considerable progress, so maybe now they're equal, but closed source vs. closed source, nVidia has nothing to be ashamed of. For example, ATI just this month added CrossFire support, while SLI has been supported since 2005 - that's more like three years behind than two years ahead.

        Of course, ATI is now opening up their specs, but it's going slowly. For example, developers have not yet received [phoronix.com] the 3D specs for the last-generation R600 cards, much less the current generation. And after those specs are released, some very non-trivial drivers must be written as well, meaning it could take another few years before we see fully functional open source drivers. Also, this strategy is less than a year old, so if they're two years ahead they're moving fast. None of this should make nVidia executives the least bit queasy.

        They are crapping their pants because ATI has delivered two kick-ass parts in the 4850 and 4870, and there's very little reason to buy anything else these days. They are crapping their pants because Intel won't give them a Nehalem system bus license. They're crapping their pants because the 800lb gorilla that's Intel is entering a market everyone else has been leaving. They're crapping their pants because the US economy is shaky and people might not spend big $$$ on graphics cards. But over Linux support? Put it into perspective, and you'll see it's an extremely minor issue.

      • A wet fart at most.

        NVIDIA may be crapping their pants for many reasons, but Ubuntu support isn't one of them.

      • Re: (Score:3, Informative)

        by Almahtar (991773)
        Interestingly enough, ATI seemed to pander more to Vista than nVidia. When DX10's driver compatibility requirements still included memory virtualization, ATI managed to work that into their drivers after a lot of time and $.

        nVidia never did - they finally said "screw it, we can't", after which Microsoft removed that requirement. Interestingly enough, that means there's no real barrier to DX10 being ported to XP, but we'll leave that alone for now.

        Meanwhile for the last few years in my experie
    • Re: (Score:3, Insightful)

      "movie studios"? Yeah, I'm sure Intel is putting a GPU in every Intel CPU (80% of the desktop market) just to make a couple of companies happy.

      Why wouldn't Intel want to be Nvidia's competitor? Intel has been the top seller of graphics chips, more than Nvidia or ATI, for some years. I'd say that they have been competing for a looong time now.

      Intel has been very successful with their integrated graphics chips because most people in the world only need a chip that can draw Windows. Apparently, now they want

    • Re: (Score:3, Informative)

      by Excors (807434)

      Intel is aiming at number crunchers (note that their chip uses doubles, not floats).

      That's not true. From their paper [intel.com]:

      Larrabee gains its computational density from the 16-wide vector processing unit (VPU), which executes integer, single-precision float, and double-precision float instructions.

      And it's definitely aimed largely at games: the paper gives performance studies of DirectX 9 rendering from Half Life 2, FEAR and Gears of War.

    • by Forkenhoppen (16574) on Sunday August 24, 2008 @10:24AM (#24726049)

      It's obvious that the graphics angle is really just a Trojan horse; they're using graphics as the reason to get it into the largest number of hands possible, but what they really want to do is to keep people writing for the X86 instruction set, rather than OpenCL, DirectX 11, or CUDA. Lock-in with the X86 instruction set has served them too well in the past.

      In other words, general compute was an area where things were slipping out of their grasp; this is a means to shore things up.

      It's a sound business strategy. But I have to agree with blogger-dude; I don't see them being overly competitive with NVIDIA and AMD's latest parts for rasterized graphics anytime soon.

    • by kripkenstein (913150) on Sunday August 24, 2008 @11:11AM (#24726287) Homepage
      No, Intel has been very clear that it is targeting games, even saying "we will win" about them, see this interview with an Intel VP [arstechnica.com].

      NVidia is on the defensive for the simple reason that it needs to be. Not because Intel has a product that threatens NVidia, but because Intel is using classic vaporware strategies to undermine NVidia (and AMD/ATI). Intel is basically throwing around promises, and by virtue of its reputation a lot of people listen and believe those promises. With 'amazing Intel GPU technology just around the corner', some people might delay buying NVidia hardware. NVidia is trying to prevent that from happening.
  • by Anonymous Coward

    At least Intel documents their hardware. Fuck NVIDIA and their stupid proprietary hardware!

    Glass

    • Re: (Score:3, Informative)

      by FoolsGold (1139759)

      NVIDIA's proprietary hardware is what's capable of playing the games I want to play at the frame rate and quality I want.

      For goodness sake already, why won't people stop being so ideological and just USE the damn hardware if it works better than the alternative. Pick what you need from a practical viewpoint, NOT on ideology. Life's not worth wasting one's efforts on the ideology of a fucking graphics chipset already!

      • by MrMr (219533) on Sunday August 24, 2008 @09:58AM (#24725899)
        Sorry, but I did exactly that, and got bitten recently: NVidia's drivers for old graphics cards lag behind more and more. I can no longer update one of my systems because the ABI version for their GLX doesn't get updated.
        The fix would be trivial (just recompile the current version), but Nvidia clearly would rather sell me a new card.
          I feel your pain, cos when I was running a desktop machine with an NVIDIA card I had problems of my own (eg. standby not resuming properly, occasional glitches, etc). However, what pisses me off is when people assume that open-source drivers such as the Intel drivers are somehow better. Shit, the Intel graphics drivers in Linux don't properly support all the GL extensions that the Windows drivers do, there's a documented but as of yet unfixed issue with the gamma levels when using XV for video playback, I'v

        • Re: (Score:3, Informative)

          by owlman17 (871857)

          I'm using the 71.86.04 driver (released in January 2008) for my Riva TNT2 on an old PII-366. Works pretty fine on a vanilla 2.4.x kernel.

          • by MrMr (219533)
            Don't upgrade X.org or you'll be out of luck. But then, I don't think there is a distro with 2.4 kernels that even uses that fork.
      • Pick what you need from a practical viewpoint, NOT on ideology.

        But Free software is practical. If it breaks, you can fix it. With closed source, you have to rely on someone else to fix it for you.

        • Re: (Score:2, Informative)

          by Anonymous Coward

          And for those of us who don't code and do other things for a living, I guess we are just shit out of luck then?

          I spend my money on hardware and drivers. In other words, I'm paying someone else to do it right.

          • by Jeremy Erwin (2054) on Sunday August 24, 2008 @12:39PM (#24726913) Journal

            Linux support done right is a well written GPL-compliant driver.

        • by Belial6 (794905)
          Oh, I rely on someone else to fix my Free software. The difference is that anybody with the skill and inclination can fix it. With closed source, only a specific person with the skill and inclination can fix it. My chances with a bunch of people are much, MUCH better than my chances with one specific person.
      • Circa 1980, MIT AI lab...

        "Xerox's 9700 laser printer prints the documents I want to print with the speed and quality that I want them printed."

        "For goodness sake already, why won't people stop being so ideological and just USE the damn hardware if it works better than the alternative. Pick what you need from a practical viewpoint, Not on ideology. Life's not worth wasting one's efforts of the ideology of a fucking laser printer already!"



        History [faifzilla.org]. Remember, what you choose today might just change wha
      • Re: (Score:3, Insightful)

        by TheRaven64 (641858)
        Using binary drivers makes it incredibly easy for old hardware to be orphaned. It's in nVidia's interest to encourage you to buy new hardware. They can do this by not supporting older drivers, and if you use something like Linux without a stable driver ABI (or even API) then you have a choice between using an old kernel or not using your GPU. They can introduce stability improvements in the old drivers that cause subtle performance degradation making you think it's time to upgrade. Or they might decide
    • Intel? documents??

      They have published a few specs - and only after a number of online petitions and PR pressure campaigns.

      As far as specs go, nVidia is in some respects less hated than Intel: the latter has a longer history of keeping everything confidential, sometimes not sharing even with partners.

      That's of course different in markets Intel is trying to enter right now, e.g. telecom: there they are very nice and polite, often sending updated specs to you even without asking.

      But as desktop market concerned, make no

  • by Kneo24 (688412) on Sunday August 24, 2008 @09:05AM (#24725665) Homepage

    "OH MY GOD! CPU AND GPU ON ONE DIE IS STOOOOOOOOPIIIIIDDDDDEDEDDDD!!!1111oneoneone"

    How stupid is it really? So what if the average consumer actually knows very little about their PC. That doesn't necessarily mean it won't be put into a person's PC.

    If they were really forward thinking, they could see it as an effort to bridge the gap between low-end PC's and high-end PC's. Now maybe, at some point in the future, people can do gaming a little better on those PC's.

    Games that were nigh unplayable would now run slightly more smoothly. With advances in this design, it could really work out better.

    Sure, for the time being, I don't doubt that the obvious choice would be to have a discrete component solution for gaming. However, there might be a point where that isn't in the gamer's best interests anymore. I'm not a soothsayer; I don't know.

    Still, I can't help but imagine how Intel's and AMD's ideas could help everyone as a whole.

    • The one thing I see is that it's merging the two different product types that have different rates of advancement. I would think that might not be such a good idea. This might make the performance gulf between typical systems and gaming systems a lot larger.

    • by Lonewolf666 (259450) on Sunday August 24, 2008 @10:48AM (#24726175)

      I think Fusion is an alternative to current integrated graphics, not to separate high-performance GPUs. After all, it shares the drawback of having to steal bandwidth from the regular RAM, whereas discrete graphics cards have their own memory.

      For a moderately power-hungry graphics chip (think 20 watt) the advantage is that Fusion can share the CPU cooler with a moderately power-hungry CPU, while integrating the GPU elsewhere on the board will require a separate cooler. That takes extra board area and money.
      So I think Fusion might be able to perform somewhat better for the same price than other integrated graphics. Which means it threatens the widely used Intel boards with integrated graphics rather than NVidia.

      To mention something else (but still mostly on topic):
      In the Linux market, AMD is currently building a lot of goodwill by providing documentation and Open Source driver code. That might become an advantage over NVidia too.

      • I'm not intending to contradict anything you've said because I agree with it all! However, thinking of the long term I can remember when the available integrated 2D + 3D graphics cards were a convenient (and cheaper) shortcut but not as good as having separate 2D and 3D cards. I feel old remembering this :-) I do wonder if the future will produce yet more integration here, though!
    • Right now GPUs cannot be used widely by software because they are relatively expensive and support is sparse.

      The point is to integrate the GPU functions deeper into the system, allowing cheap low-end integrated boards to also have GPUs.

      nVidia tries hard to keep GPU acceleration exclusive to the high end. Intel and AMD/ATI want it to hit the low end - the market where most of the money is.

  • by HonkyLips (654494) on Sunday August 24, 2008 @09:09AM (#24725681)
    A recent journal article on ArsTechnica points to an Intel blog on Larrabee: http://arstechnica.com/journals/hardware.ars/2008/05/01/larrabee-engineer-on-personal-blog-larrabee-is-all-about-rasterization [arstechnica.com] Curious.
  • by propanol (1223344) on Sunday August 24, 2008 @09:29AM (#24725755)

    Ten years ago you would see Nvidia GPUs in everything from low- to high-end. Today, not so much - Intel dominates the low-end spectrum, with ATI hanging onto a somewhat insignificant market share. Larrabee is Intel moving upmarket. Sure, it might not perform as well as the latest Nvidia or ATI high-end GPU, but it might be enough in terms of performance, or have other benefits (better OSS support), to win over some of Nvidia's current market share. Considering it's supposedly the Pentium architecture recycled, it's also reasonable to assume the design will be relatively cost-effective and allow Intel to sell at very competitive prices while still maintaining healthy profit margins.

    It's a classic case of disruption. Intel enters and Nvidia is happy to leave because there's a segment above that's much more attractive to pursue. Continue along the same lines until there's nowhere for Nvidia to run, at which point the game ends - circle of disruption complete. See also Silicon Graphics, Nvidia's predecessor in many ways.

    • by eddy (18759) on Sunday August 24, 2008 @10:17AM (#24726009) Homepage Journal

      >ATI hanging onto a somewhat insignificant market share.

      C'mon, 17 million units shipped in a quarter and ~20% of the market is hardly 'a somewhat insignificant market share' in a market with four major players (ATI plus Intel, nVidia, and VIA).

      For comparison, take Matrox, they have insignificant market share with about 100K/q

      • Re: (Score:3, Insightful)

        by pavon (30274)

        No, he is saying that ATI has an insignificant portion of the low-end market, which is true. Both ATI and NVIDIA cards are now seen as upgrades to the default Intel chipset in practically every laptop sold, whereas in the past they provided both the low-end and high-end cards for that market.

    • Re: (Score:3, Insightful)

      by Calinous (985536)

      Intel has always been the biggest graphics chip provider - and I don't think they were ever below 40% of the total market (by unit numbers at least). With all their expensive and cheap graphics chips, ATI and NVidia were unable to dethrone Intel's integrated graphics division.

    • by cnettel (836611) on Sunday August 24, 2008 @11:15AM (#24726307)
      Ten years ago, the Riva TNT was still a few months away. S3 and ATI both had a great market share in the low to mid-end, and 3dfx dominated the very top segment for gamers.
  • ...and ATI/AMD easily bests nVidia. Somehow, I'm not surprised.

  • Because the drivers are open source and work out the box on every modern Linux distro.

    I like my compiz eye-candy and Intel delivers more than enough performance for it.

  • If, in the future, the trend evolves that all gpu's are integrated.

    Intel, nvidia, AMD and ATI...

    Who is the odd one out there?

  • by Patoski (121455) on Sunday August 24, 2008 @09:40AM (#24725807) Homepage Journal

    Lots of people here, and analysts, have written off AMD. I think AMD is in a great position if they can survive their short-term debt problems, which is looking increasingly likely.

    Consider the following:

    • Intel's GPU tech is terrible.
    • Nvidia doesn't have x86 design / manufacturing experience, an x86 license, or even x86 technology they want to announce.
    • AMD currently has the best GPU technology, and their CPU technology is very close to Intel's.

    AMD is in a great position like no other company to capitalize on the coming CPU / GPU convergence. Everyone jeered when AMD bought ATI but it is looking to be a great strategic move if they can execute on their strategy.

    AMD has the best mix of technology, they just have to put it to good use.

    • by Hektor_Troy (262592) on Sunday August 24, 2008 @10:02AM (#24725927)

      Well, if you read the reviews, AMD's integrated graphics solution, the 780G, kicks ass. Only the very newest Intel integrated chipset is slightly better, but that uses around 20W compared to the AMD chipset's 1W.

    • Re: (Score:3, Interesting)

      by Bender_ (179208)

      Just looking at this from a manufacturing side:

      AMD is roughly two years behind Intel in semiconductor process technology. Due to this and other reasons (SOI, R&D and SG&A vs. revenue) they are in a very bad cost position. Even if they have a better design, Intel is easily able to offset this with pure manufacturing power.

      The playground is more level for Nvidia vs. ATI since both rely on foundries.

      It's tough to tell whether ATI/AMD will be able to capitalize on this situation. They are very lucky to have a n

      • by ZosX (517789)
        This is certainly true as well. When you manufacture on a smaller fabrication process you can make more CPUs from the same material. Intel spent billions to stay ahead of the curve, and it has enabled them to solidify their position. AMD is playing catch-up in a serious way, and what early gains they made (x86-64, multi-core, low power) have been adopted by Intel. Don't worry, AMD is not going anywhere. Intel will always need a second x86 chip manufacturer in the market to avoid further anti-trust litigation
    • by TheRaven64 (641858) on Sunday August 24, 2008 @10:21AM (#24726029) Journal

      Nvidia doesn't have x86 design/manufacturing experience, an x86 license, or even any x86 technology they want to announce

      True. They do, however, have an ARM Cortex A8 system-on-chip, sporting up to four Cortex cores and an nVidia GPU in a package that consumes under 1W (down to under 250mW for the low-end parts). Considering that the ARM market is currently an order of magnitude bigger than the x86 market and growing two to three times faster, I'd say they're in a pretty good position.

      • by serviscope_minor (664417) on Sunday August 24, 2008 @01:10PM (#24727135) Journal

        True. They do, however, have an ARM Cortex A8 system-on-chip, sporting up to four Cortex cores and an nVidia GPU in a package that consumes under 1W (down to under 250mW for the low-end parts). Considering that the ARM market is currently an order of magnitude bigger than the x86 market and growing two to three times faster, I'd say they're in a pretty good position.

        True, but the ARM market also has many more players. ARM will license their core to anyone, so you have Intel VS AMD VS VIA in x86 land and TI vs Philips (NXP now -- I LOVE this chip) VS Marvell (not a licensee, but they have a crummy chip for free) VS NVIDIA? VS Analog (that's kind of funny) VS IBM VS Fujitsu VS Freescale VS STM VS Cirrus VS Atmel VS Broadcom VS Nintendo VS Sharp VS Samsung VS ... VS there's probably even Xilinx in there for good measure.

        So, the market is larger, but the competition is stiffer.

        That said, if they made an EEE-like machine with NVidia's graphics and 4x Cortex cores, I'd buy one.

      • Re: (Score:3, Insightful)

        by Big Jojo (50231)

        ARM Cortex A8 system on chip, sporting up to four Cortex cores ...

        Shouldn't that be four Cortex A9 cores, on the grounds that the A9 has SMP support but the A8 doesn't?

        That said ... the reason the ARM market is so big is that it goes after a lower power market, with cell phones and other battery powered gadgets being the canonical example. That market has not yet felt a real need for SMP. So even if such an NVidia chip gets off the ground, it's unclear how much it would sell.

        The first widely available ARM Cortex

    • With VIA going away and nVidia rumored to stop developing chipsets (and even if they continue, it won't get any easier for AMD/ATI), AMD is missing someone to develop and manufacture good motherboard chipsets.
      • Spider platform (Score:3, Informative)

        by DrYak (748999)

        AMD is missing someone to develop and manufacture good motherboard chipsets.

        Haven't been following the news recently ?!?

        ATI/AMD's latest series of chipsets (the 790) is quite good. That's the reason VIA announced dropping that market in the first place.

        The only problem is that nVidia's SLI is currently a proprietary technology requiring licensing. That's why a lot of players still buy nVidia's chipsets and avoid ATI's - not that ATI's are bad, on the contrary, but they lack the license required for SLI.

        This SLI problem is also explaining why nVidia may have to consider sto

    • by ZosX (517789) <zosxavius@NosPam.gmail.com> on Sunday August 24, 2008 @10:26AM (#24726057) Homepage
      Nvidia does indeed have a license to x86. They acquired it when they bought all of 3dfx's intellectual property, and they in fact manufacture a 386SX clone. Rumors have persisted that they are looking to enter the x86 market. It should be noted that they are still relative outsiders, in that their licensing doesn't extend to the x86-64 instruction set, which is taking over the market now.
    • Re: (Score:2, Interesting)

      by fast turtle (1118037)

      Lots of people here and analysts have written off AMD. I think AMD is in a great position if they can survive their short term debt problems which is looking increasingly likely.

      Everyone forgets that AMD purchased ATI not for their GPU line (that was a bonus) but for their north/south bridge chipsets, as it offered them the ability to finally provide a complete solution as Intel does. For example, Asus only designs the PCB used in their motherboards while using either an Intel chipset (Socket 775/ICH9/GMA3100) or an AMD one (Socket AM2) with either an ATI northbridge or the Nvidia nForce chipset. The only thing AMD gets from this is the socket and CPU sale, while ATI or Nvidia gets the rest of the b

  • by Anonymous Coward

    Company says competitor's product sucks! News at 11.

  • of a GeForce 8 series (which came out in 2006)???

    EXCELLENT!

    Thanks AMD for suggesting Intel, I'm gonna save lots of money on my next motherboard by not needing an nvidia graphics card!

  • What bullshit. (Score:5, Interesting)

    by Anonymous Coward on Sunday August 24, 2008 @09:52AM (#24725859)

    From the SIGGRAPH paper they need something like 25 cores to run GoW at 60Hz. That's with 1GHz cores, though, for comparison. LRB will probably run at something like 3GHz, meaning you only need around 8-9 cores to run GoW at 60Hz, and with benchmarks scaling up to 48 cores you can see that this has the potential of being very fast indeed.

    More importantly, the LRB has much better utilization since there aren't any fixed function divisions in the hardware. E.g. most of the time you're not using the blend units. So why have all that hardware for doing floating point maths in the blending units when 99% of the time you're not actually using it? On LRB everything is utilized all the time. Blending, interpolation, stencil/alpha testing etc. is all done using the same functionality, meaning that when you turn something off (like blending) you get better performance rather than just leaving parts of your chip idle.

    I'd also like to point out that having a software pipeline means faster iteration, meaning they have a huge opportunity to simply out-optimize nvidia and amd, even for the D3D/OGL pipelines.

    Furthermore, imagine Intel supplying half a dozen "profiles" for their pipeline, each optimized for a different scenario (e.g. deferred rendering, shadow-volume-heavy rendering, etc.). The user can then try each with their games and run each game with a slightly different profile. More importantly, new games could just spend 30 minutes figuring out which profile suits them best, set a flag in the registry somewhere, and automatically get a big boost on LRB cards. That's a tiny amount of work for LRB-specific performance wins.

    The next step in LRB-specific optimizations is to allow developers to essentially set up a LRB-config file for their title with lots of variables and tuning (remember that LRB uses a JIT compiled inner-loop that combines the setup, tests, pixel shader etc.). This would again be a very simple thing to do (and intel would probably do it for you if your title is high profile enough), and could potentially give you a massive win.

    And then of course the next step after that is LRB-specific code. I.e. you write stuff outside D3D/OGL to leverage the LRB specifically. This probably won't happen for many games, but you only need to convince Tim Sweeney and Carmack to do it, and then most of the high-profile games will benefit automatically (through licensing). My guess is that you don't need to do much convincing. I'm a graphics programmer myself and I'm gagging to get my hands on one of these chips! If/when we do, I'll be at work on weekends and holidays coding up cool tech for it. I'd be surprised if Sweeney/Carmack aren't the same.

    I think LRB can be plenty competitive with nvidia and amd using the standard pipelines, and there's a very appealing low-friction path for developers to take to leverage the LRB specifically, with varying degrees of effort.
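    The core-count arithmetic in the comment above can be sanity-checked with a quick sketch. Note the assumptions: the 25-cores-at-1GHz figure is the SIGGRAPH number as quoted, the 3GHz clock is the commenter's guess (not a confirmed Larrabee spec), and performance is assumed to scale linearly with clock and core count.

```python
# Back-of-envelope scaling: if N cores at clock f1 sustain a workload,
# roughly N * f1 / f2 cores are needed at clock f2, assuming near-linear
# scaling (which the paper's benchmarks up to 48 cores suggest).

def cores_needed(baseline_cores: int, baseline_ghz: float, target_ghz: float) -> float:
    """Cores needed at target_ghz to match baseline_cores running at baseline_ghz."""
    return baseline_cores * baseline_ghz / target_ghz

# Gears of War at 60Hz: ~25 cores at 1GHz (SIGGRAPH figure, as quoted above)
print(cores_needed(25, 1.0, 3.0))  # ~8.3 cores at a hypothetical 3GHz clock
```

    This is why the commenter lands on "8-9 cores": 25 × 1GHz / 3GHz ≈ 8.3, rounded up to leave headroom.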

    • by ZosX (517789)
      You forgot to mention that these will likely be installed in a huge portion of the desktop market. Better performance than Intel's past offerings in the integrated market will be extremely welcome to game developers - and who doesn't want to expand their market?
    • Re:What bullshit. (Score:4, Interesting)

      by Anonymous Coward on Sunday August 24, 2008 @01:44PM (#24727479)

      Some notes from Tim Sweeney in a discussion on this:

      "Note that the quoted core counts for AMD and NVIDIA are misleading.

      A GPU vendor quoting "240 cores" is actually referring to a 15-core chip, with each core supporting 16-wide vectors (15*16=240). This would be roughly comparable to a 15-core Larrabee chip.

      Also keep in mind, a game engine need not use an architecture such as this heterogeneously. A cleaner implementation approach would be to compile and run 100% of the codebase on the GPU, treating the CPU solely as an I/O controller. Then, the programming model is homogeneous, cache-coherent, and straightforward.

      Given that GPUs in the 2009 timeframe will have multiple TFLOPs of computing power, versus under 100 GFLOPS for the CPU, there's little to lose by underutilizing the CPU.

      If Larrabee-like functionality eventually migrates onto the main CPU, then you're back to being purely homogeneous, with no computing power wasted.

      I agree that a homogeneous architecture is not just ideal, but a prerequisite to most developers adopting large-scale parallel programming.

      In consumer software, games are likely the only applications whose developers are hardcore enough to even contemplate a heterogeneous model. And even then, the programming model is sufficiently tricky that the non-homogeneous components will be underutilized.

      The big lesson we can learn from GPUs is that a powerful, wide vector engine can boost the performance of many parallel applications dramatically. This adds a whole new dimension to the performance equation: it's now a function of Cores * Clock Rate * Vector Width.

      For the past decade, this point has been obscured by the underperformance of SIMD vector extensions like SSE and Altivec. In those cases the basic idea was sound, but the resulting vector model wasn't a win because it was far too narrow and lacked the essential scatter/gather vector memory addressing instructions.

      All of this shows there's a compelling case for Intel and AMD to put Larrabee-like vector units in future mainstream CPUs, gaining 16x more performance on data-parallel code very economically.

      Tim Sweeney
      Epic Games"
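    Sweeney's two points - that "240 cores" really means 15 cores × 16 vector lanes, and that throughput scales as Cores × Clock Rate × Vector Width - can be illustrated with a small sketch. The clock speed and the FLOPs-per-lane figure below are illustrative assumptions, not vendor specs.

```python
# "Marketing cores" vs. real cores: vendors multiply physical cores by
# vector width. Peak throughput then scales as cores * clock * width.

def marketing_cores(cores: int, vector_width: int) -> int:
    """The inflated core count a vendor would quote for this chip."""
    return cores * vector_width

def peak_gflops(cores: int, clock_ghz: float, vector_width: int,
                flops_per_lane: int = 2) -> float:
    """Peak GFLOPS; flops_per_lane=2 assumes one fused multiply-add per cycle."""
    return cores * clock_ghz * vector_width * flops_per_lane

print(marketing_cores(15, 16))   # 240 "cores" for a 15-core, 16-wide chip
print(peak_gflops(15, 1.5, 16))  # 720.0 GFLOPS at an assumed 1.5GHz clock
```

    The same formula shows why Sweeney calls vector width "a whole new dimension": doubling any one of the three factors doubles peak throughput, and a 16-wide unit buys 16x over scalar code at essentially no extra clock or core cost.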

  • Regardless of Larrabee's performance being crap when it's released in 2010, this is a step in the right direction, even if Intel doesn't know how to make very good graphics cards. In time, I'm sure, there won't be a difference between CPU/GPU, and all the memory will be shared (for the majority of systems). We'll be looking back saying "wow, why would anyone want that extra memory from the graphics card just sitting there while my system has maxed its main physical memory?". If Intel drops into the graphics m

  • by S3D (745318) on Sunday August 24, 2008 @10:41AM (#24726135)
    With all OpenGL extensions supported and working properly, compared to the latest and greatest from NVIDIA, where I can never be sure which extension works on which driver with which card.
  • so... (Score:2, Funny)

    by 800DeadCCs (996359)

    So Intel will only be about 4 years behind current in their graphics system when it comes out.
    In that case, it's probably the biggest leap they'll have ever made.

  • FTA:

    He [Mottram] also predicted that ATI would regret its focus on raw graphical power at the expense of more general-purpose capabilities.

    Funny how at the 9500 Pro's release there was such a focus on not using raw video horsepower to draw frames, but on using occlusion culling and other tricks to save time/memory/bandwidth and increase rendering speed.

    Versus the GeForce line's dependence on raw horsepower, drawing everything in a scene.

    Brute force or thinking ahead?

    I have to admit, I like the idea that Kneo24 had up

  • From 2006? (Score:2, Interesting)

    Didn't the 8800 series come out at the end of 2006? The first-gen 8800GTS 640MB and 8800GTX 768MB are still powerful video cards by today's standards... so if Larrabee is "a GPU from 2006", isn't that a compliment to Intel?
    • Re: (Score:3, Interesting)

      by Carbon016 (1129067)
      I think we're talking more about 8600-esque chips. The only reason these older cards are performing close to the newer ones is that app writers (esp. games) are focusing very heavily on optimization, counter to what the "PC GAMING IS DEAD" trolls might have you believe. Call of Duty 4 runs at an extremely high frame rate on a 6600GT, for christ's sake.
  • by simplerThanPossible (1056682) on Sunday August 24, 2008 @12:37PM (#24726901)

    As Moore's law makes silicon cheaper, what are you going to do with it? More cache, more cores... why not a GPU? Concurrent software to utilize multiple cores is not yet mainstream (and maybe never will be), so that leaves cache and GPU.

    In a way, the existence of separate GPUs is just a sign that the CPU wasn't powerful enough to deliver the graphics the market wanted (and would pay for). When CPUs are powerful enough (clock speed or multi-core), they'll subsume the GPU, as they did maths co-processors and cache. I.e. the silicon would be partitioned into CPU, GPU and cache - but it would all be on one chip (called the "CPU", no doubt).

    Intel already owns a fair bit of the integrated graphics market. They have great access to channels. Even if this is only half as good as a separate GPU, they will increase market share. I can't see what could stop them... except maybe a patented technology that can't be worked around. Some manufacturers of separate GPUs will survive in specialized niches. Some.

  • "It's an incremental expense, not a linear function. It's cheaper to separate them."

    Here is one area in which NVIDIA clearly has the upper hand over Intel. They apparently have figured out how to perform incremental [reference.com] functions in a non-linear fashion!

    Great. Just 13 years after finally proving Fermat's Last Theorem [wikipedia.org], NVIDIA throws a new challenge at the Mathematicians! Now they'll never get any real work done ...
