Nvidia Claims Intel's Larrabee Is "a GPU From 2006"

Barence sends this excerpt from PC Pro: "Nvidia has delivered a scathing criticism of Intel's Larrabee, dismissing the multi-core CPU/GPU as wishful thinking — while admitting it needs to catch up with AMD's current Radeon graphics cards. 'Intel is not a stupid company,' conceded John Mottram, chief architect for the company's GT200 core. 'They've put a lot of people behind this, so clearly they believe it's viable. But the products on our roadmap are competitive to this thing as they've painted it. And the reality is going to fall short of the optimistic way they've painted it. As [blogger and CPU architect] Peter Glaskowsky said, the "large" Larrabee in 2010 will have roughly the same performance as a 2006 GPU from Nvidia or ATI.' Speaking ahead of the opening of the annual NVISION expo on Monday, he also admitted Nvidia 'underestimated ATI with respect to their product.'"


  • by Kneo24 ( 688412 ) on Sunday August 24, 2008 @09:05AM (#24725665)

    "OH MY GOD! CPU AND GPU ON ONE DIE IS STOOOOOOOOPIIIIIDDDDDEDEDDDD!!!1111oneoneone"

    How stupid is it really? So what if the average consumer actually knows very little about their PC. That doesn't necessarily mean it won't be put into a person's PC.

    If they were really forward-thinking, they could see it as an effort to bridge the gap between low-end PCs and high-end PCs. Now maybe, at some point in the future, people can do gaming a little better on those PCs.

    Instead of games being nigh unplayable, they would now run slightly more smoothly. With advances in this design, it could really work out better.

    Sure, for the time being, I don't doubt that the obvious choice would be to have a discrete-component solution for gaming. However, there might be a point where that isn't in gamers' best interests anymore. I'm not a soothsayer; I don't know.

    Still, I can't help but imagine how Intel's and AMD's ideas could help everyone as a whole.

  • by Patoski ( 121455 ) on Sunday August 24, 2008 @09:40AM (#24725807) Homepage Journal

    Lots of people here, and analysts, have written off AMD. I think AMD is in a great position if they can survive their short-term debt problems, which looks increasingly likely.

    Consider the following:

    • Intel's GPU tech is terrible.
    • Nvidia doesn't have x86 design/manufacturing experience, an x86 license, or even x86 technology they want to announce.
    • AMD currently has the best GPU technology and their technology is very close to Intel's for CPUs.

    AMD is positioned like no other company to capitalize on the coming CPU/GPU convergence. Everyone jeered when AMD bought ATI, but it is looking like a great strategic move if they can execute on their strategy.

    AMD has the best mix of technology; they just have to put it to good use.

  • What bullshit. (Score:5, Interesting)

    by Anonymous Coward on Sunday August 24, 2008 @09:52AM (#24725859)

    From the SIGGRAPH paper, they need something like 25 cores to run GoW at 60Hz. Those are 1GHz cores, though, for comparison. LRB will probably run at something like 3GHz, meaning you only need something like 8-9 cores to run GoW at 60, and with benchmarks stretching up to 48 cores you can see that this has the potential to be very fast indeed.
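
    The back-of-the-envelope version, assuming performance scales roughly linearly with clock (a generous assumption, since memory bandwidth doesn't speed up just because the cores do): 25 cores x 1GHz is 25 core-GHz worth of work, and 25 / 3 is about 8.3, hence the 8-9 core figure at 3GHz for the same frame rate.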

    More importantly, the LRB has much better utilization since there aren't any fixed function divisions in the hardware. E.g. most of the time you're not using the blend units. So why have all that hardware for doing floating point maths in the blending units when 99% of the time you're not actually using it? On LRB everything is utilized all the time. Blending, interpolation, stencil/alpha testing etc. is all done using the same functionality, meaning that when you turn something off (like blending) you get better performance rather than just leaving parts of your chip idle.
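
    A toy sketch of that point (this is not Larrabee code, just an illustration that when every pipeline stage is ordinary software, a disabled stage costs nothing instead of leaving silicon idle):

        struct PipelineState {
            bool  blend_enabled;
            bool  alpha_test_enabled;
            float alpha_ref;
        };

        // One pixel of a purely software pipeline: every stage is plain code,
        // so switching a stage off removes its cost instead of idling a fixed-function unit.
        inline void shade_pixel(const PipelineState& s, const float src[4], float dst[4]) {
            if (s.alpha_test_enabled && src[3] < s.alpha_ref)
                return;                          // rejected: no further work at all
            if (s.blend_enabled) {
                for (int c = 0; c < 4; ++c)      // source-alpha blend, run on the same ALUs as everything else
                    dst[c] = src[3] * src[c] + (1.0f - src[3]) * dst[c];
            } else {
                for (int c = 0; c < 4; ++c)      // blending disabled: the blend math simply never executes
                    dst[c] = src[c];
            }
        }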

    I'd also like to point out that having a software pipeline means faster iteration, meaning that they have a huge opportunity to simply out-optimize nvidia and amd, even for the D3D/OGL pipelines.

    Furthermore, imagine intel supplying half a dozen "profiles" for their pipeline where they optimize for various scenarios (e.g. deferred rendering, shadow-volume-heavy rendering, etc.). The user can then try each with their games and run each game with a slightly different profile. More importantly, however, new games could just spend 30 minutes figuring out which profile suits them best, set a flag in the registry somewhere, and automatically get a big boost on LRB cards. That's a tiny amount of work to get LRB-specific performance wins.

    The next step in LRB-specific optimizations is to allow developers to essentially set up a LRB-config file for their title with lots of variables and tuning (remember that LRB uses a JIT compiled inner-loop that combines the setup, tests, pixel shader etc.). This would again be a very simple thing to do (and intel would probably do it for you if your title is high profile enough), and could potentially give you a massive win.
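
    Purely as an illustration of what such a per-title tuning file might boil down to (every name below is invented for the example, nothing Intel has announced):

        // Hypothetical per-title tuning profile for a software rasterizer.
        struct LrbTitleProfile {
            int  tile_size;          // screen-space bin size used by the binning rasterizer
            int  bins_in_flight;     // how many bins the cores chew on concurrently
            bool favor_deferred;     // bias scheduling toward deferred-rendering passes
            bool stencil_heavy;      // bias scheduling toward shadow-volume workloads
        };

        static const LrbTitleProfile kMyGameProfile = { 64, 8, true, false };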

    And then of course the next step after that is LRB-specific code, i.e. you write stuff outside D3D/OGL to leverage the LRB specifically. This probably won't happen for many games, but you only need to convince Tim Sweeney and Carmack to do it, and then most of the high-profile games will benefit automatically (through licensing). My guess is that you don't need to do much convincing. I'm a graphics programmer myself and I'm gagging to get my hands on one of these chips! If/when we do, I'll be at work on weekends and holidays coding up cool tech for it. I'd be surprised if Sweeney/Carmack aren't the same.

    I think LRB can be plenty competitive with nvidia and amd using the standard pipelines, and there's a very appealing low-friction path for developers to take to leverage the LRB specifically with varying degrees of effort.

  • by MrMr ( 219533 ) on Sunday August 24, 2008 @09:58AM (#24725899)
    Sorry, but I did exactly that, and got bitten recently: NVidia's drivers for old graphics cards lag behind more and more. I can no longer update one of my systems because the ABI version for their GLX doesn't get updated.
    The fix would be trivial (just recompile the current version), but Nvidia clearly would rather sell me a new card.
  • by Hektor_Troy ( 262592 ) on Sunday August 24, 2008 @10:02AM (#24725927)

    Well, if you read the reviews, AMD's integrated graphics solution, the 780G, kicks ass. Only the very, very newest Intel integrated chipset is slightly better, but that uses around 20W compared to the AMD chipset's 1W.

  • by Bender_ ( 179208 ) on Sunday August 24, 2008 @10:09AM (#24725969) Journal

    Just looking at this from a manufacturing side:

    AMD is roughly two years behind Intel in semiconductor process technology. Due to this and other reasons (SOI, R&D/SG&A vs. revenue), they are in a very bad cost position. Even if they have a better design, Intel is easily able to offset this with pure manufacturing power.

    The playground is more level for Nvidia vs. ATI since both rely on foundries.

    It's tough to tell whether ATI/AMD will be able to capitalize on this situation. They are very lucky to have a new opportunity; otherwise they would be toast.

    Two things are for certain: Nvidia is getting into rougher waters soon and Intel will not give up on this one easily.

  • by TheRaven64 ( 641858 ) on Sunday August 24, 2008 @10:21AM (#24726029) Journal

    Nvidia doesn't have x86 design/manufacturing experience, an x86 license, or even x86 technology they want to announce

    True. They do, however, have an ARM Cortex A8 system-on-chip, sporting up to four Cortex cores and an nVidia GPU in a package that consumes under 1W (down to under 250mW for the low-end parts). Considering that the ARM market is currently an order of magnitude bigger than the x86 market and growing around two to three times faster, I'd say they're in a pretty good position.

  • by ZosX ( 517789 ) <zosxavius@gmQUOTEail.com minus punct> on Sunday August 24, 2008 @10:26AM (#24726057) Homepage
    Nvidia does indeed have a license to x86. They acquired it when they bought all of 3dfx's intellectual property. They in fact manufacture a 386SX clone. Rumors have persisted that they are looking to enter the x86 market. It should be noted that they are still relative outsiders in that their licensing doesn't extend into the x86-64 instruction set, which is taking over the market now.
  • by S3D ( 745318 ) on Sunday August 24, 2008 @10:41AM (#24726135)
    With all supported OpenGL extensions working properly, versus the latest and greatest from NVIDIA, where I can never be sure which extension works on which driver with which card.
  • by sammyF70 ( 1154563 ) on Sunday August 24, 2008 @11:20AM (#24726349) Homepage Journal
    Yes. I know about ATI releasing the specs, which is why I said it might have gotten better now, though I guess it's going to be some time before we see anything happen (but it probably will)
  • by sortius_nod ( 1080919 ) on Sunday August 24, 2008 @11:26AM (#24726393) Homepage

    I wouldn't call it that.

    I'd call it a knee-jerk reaction to a non-issue.

    Nvidia are getting very scared now that ATi are beating them senseless. I run both ATi and Nvidia, so don't go down the "you're just a fanboy" angle either.

    I've seen chip makers come and go; this is just another attempt by Nvidia to try and shore up support for their product, but this time they can't turn to ATi and say "look how crap their chips are" - they have to do it to Intel, who are aiming the chips at corporate markets.

    To be honest, the best bang for buck at the lower end of the market for 2D seems to be the Intel chips. One thing that does tend to surprise people is the complete lack of performance that the Nvidia chipsets have when not in 3D. ATi don't seem to have these problems, having built around a solid base of 2D graphics engines in the 90s (Rage/RageII is at least one reason why people went with Macs back then). Nvidia is really feeling the pinch with ATi taking up the higher end of the market (pro gear/high-end HD) and Intel shoring up the lower end (GMA, etc). Nvidia are pretty much stuck with consumers buying their middle-of-the-line gear (8600/9600).

    When you aim high, you tend to hurt badly when you fall from grace. The whole 8800-to-9800 leap was abysmal at best, unlike their main competitor, who really pulled their finger out to release the 3xxx and 4xxx series.

    All in all this seems like a bit of pork barrelling on Nvidia's part to distract from the complete lack of performance in their $1000 video card range. If anything, this type of bullshit will be rewarded with a massive consumer (yes, geek and gamer) backlash.

    I know my products, I know their limitations - I don't need some exec talking crap to tell me, and base level consumers will never read it.

  • From 2006? (Score:2, Interesting)

    by iamwhoiamtoday ( 1177507 ) on Sunday August 24, 2008 @12:13PM (#24726709)
    Didn't the 8800 series come out at the end of 2006? The first-gen 8800GTS 640MB and the 8800GTX 768MB are still powerful video cards by today's standards... so if Larrabee is "a GPU from 2006," isn't that a compliment to Intel?
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Sunday August 24, 2008 @12:49PM (#24726987)
    Comment removed based on user account deletion
  • by serviscope_minor ( 664417 ) on Sunday August 24, 2008 @01:10PM (#24727135) Journal

    True. They do, however, have an ARM Cortex A8 system-on-chip, sporting up to four Cortex cores and an nVidia GPU in a package that consumes under 1W (down to under 250mW for the low-end parts). Considering that the ARM market is currently an order of magnitude bigger than the x86 market and growing around two to three times faster, I'd say they're in a pretty good position.

    True, but the ARM market also has many more players. ARM will license their core to anyone, so you have Intel VS AMD VS VIA in x86 land and TI vs Philips (NXP now -- I LOVE this chip) VS Marvell (not a licensee, but they have a crummy chip for free) VS NVIDIA? VS Analog (that's kind of funny) VS IBM VS Fujitsu VS Freescale VS STM VS Cirrus VS Atmel VS Broadcom VS Nintendo VS Sharp VS Samsung VS ... VS there's probably even Xilinx in there for good measure.

    So, the market is larger, but the competition is stiffer.

    That said, if they made an EEE like machine with NVidia's graphics and 4x cortex cores, I'd buy one.

  • by fast turtle ( 1118037 ) on Sunday August 24, 2008 @01:36PM (#24727391) Journal

    Lots of people here, and analysts, have written off AMD. I think AMD is in a great position if they can survive their short-term debt problems, which looks increasingly likely.

    Everyone forgets that AMD purchased ATI not for their GPU line (that was a bonus) but for their north/south bridge chipsets, as it offered them the ability to finally provide a complete solution as Intel does. For example, Asus only designs the PCB used in their motherboards while using either an Intel chipset (Socket 775/ICH9/GMA3100) or an AMD one (Socket AM2) with either an ATI northbridge or the Nvidia nForce chipset. The only thing AMD gets from this is the socket and CPU sale, while ATI or Nvidia gets the rest of the business. Not a good deal in comparison to Intel, who gets everything but the PCB motherboard business.

    Remember who gets the blame if something's wrong with the board? Even if it's actually Intel's fault, Asus bites it, not Intel, and that is just one of the advantages of producing not only the CPU but the entire chipset, which is exactly what AMD wants.

    Now that AMD is finally digesting the meal called ATI, we're beginning to see the advantages that were the reason for the purchase: improved energy efficiency and performance from AMD-branded motherboards, along with the profits. We're also going to begin seeing the real open-source support for ATI GPUs that everyone wants yet has been lacking to date (docs released to the devs under NDAs). In fact, I can see ATI doing the same as Nvidia, but in reverse: open-source drivers with a Windows-only binary blob for their video cards.

  • Re:What bullshit. (Score:4, Interesting)

    by Anonymous Coward on Sunday August 24, 2008 @01:44PM (#24727479)

    Some notes from Tim Sweeney in a discussion on this:

    "Note that the quoted core counts for AMD and NVIDIA are misleading.

    A GPU vendor quoting "240 cores" is actually referring to a 15-core chip, with each core supporting 16-wide vectors (15*16=240). This would be roughly comparable to a 15-core Larrabee chip.

    Also keep in mind, a game engine need not use an architecture such as this heterogeneously. A cleaner implementation approach would be to compile and run 100% of the codebase on the GPU, treating the CPU solely as an I/O controller. Then, the programming model is homogeneous, cache-coherent, and straightforward.

    Given that GPUs in the 2009 timeframe will have multiple TFLOPs of computing power, versus under 100 GFLOPS for the CPU, there's little to lose by underutilizing the CPU.

    If Larrabee-like functionality eventually migrates onto the main CPU, then you're back to being purely homogeneous, with no computing power wasted.

    I agree that a homogeneous architecture is not just ideal, but a prerequisite to most developers adopting large-scale parallel programming.

    In consumer software, games are likely the only applications whose developers are hardcore enough to even contemplate a heterogeneous model. And even then, the programming model is sufficiently tricky that the non-homogeneous components will be underutilized.

    The big lesson we can learn from GPUs is that a powerful, wide vector engine can boost the performance of many parallel applications dramatically. This adds a whole new dimension to the performance equation: it's now a function of Cores * Clock Rate * Vector Width.

    For the past decade, this point has been obscured by the underperformance of SIMD vector extensions like SSE and Altivec. In those cases, the basic idea was sound, but the resulting vector model wasn't a win because it was far too narrow and lacked the essential scatter/gather vector memory addressing instructions.

    All of this shows there's a compelling case for Intel and AMD to put Larrabee-like vector units in future mainstream CPUs, gaining 16x more performance on data-parallel code very economically.

    Tim Sweeney
    Epic Games"

  • Re:From 2006? (Score:3, Interesting)

    by Carbon016 ( 1129067 ) on Sunday August 24, 2008 @03:33PM (#24728597)
    I think we're talking more about the 8600-esque chips. The only reason these older cards are performing so close to the newer ones is that app writers (esp. games) are focusing very heavily on optimization, counter to what the "PC GAMING IS DEAD" trolls might make you believe. Call of Duty 4 runs at an extremely high frame rate on a 6600GT, for Christ's sake.
  • by serialdogma ( 883470 ) <black0hole@gmail.com> on Sunday August 24, 2008 @03:53PM (#24728821)

    The 386SX clone is from their acquisition of ULi, formerly a part of Acer. Also, their 3dfx x86 licenses might not be much help (even if they haven't expired) because modern x86 is quite different, with all the extensions like x64.

  • by ThisNukes4u ( 752508 ) * <tcoppi@@@gmail...com> on Sunday August 24, 2008 @04:54PM (#24729423) Homepage
    2D performance is more than just how fast you can refresh a framebuffer from memory. Check out x11perf -aa10, which tests drawing 10pt anti-aliased fonts. My Radeon 9250 with open-source drivers gets about a 2x better score than my brand-new 4850 with fglrx. The difference is that ATI/AMD (and Nvidia as well) don't spend nearly as much time optimizing these parts of the driver (considered "2D" but they really use the 3D engine), while you need hardware acceleration and driver support to do it at a good speed (which the open-source r200 driver provides, even faster than pure software on my not-too-sluggish Phenom 9950).
  • by innocent_white_lamb ( 151825 ) on Sunday August 24, 2008 @07:38PM (#24730871)

    Intel has released comprehensive driver docs for a long time, and their driver still sucks.
     
    It's my understanding that the current performance (or lack thereof) of Intel graphic chipsets on Linux is due merely to the capabilities of the graphic chipsets themselves, not the driver.
     
    I always try to get Intel graphic chipsets on every computer that I buy. I don't do gaming and I do Linux exclusively, so Intel is the answer to "what graphic chipset should I get" at the moment.
     
    It's dead simple to make it work and it's adequate for what I do.

  • by DrYak ( 748999 ) on Sunday August 24, 2008 @08:26PM (#24731187) Homepage

    For something that was back in the early K8 days, it seems like 99% of the boards on the market today for AMD CPUs have nvidia/via chipsets.

    And since the Phenom and the AM2+ socket appeared, 99% of the boards on the market for these use nvidia/ati chipsets.

    The few VIA-based motherboards you can see are usually based on derivatives of the KT800 chipset that was already available back in the early K8 days (as the memory controller is on the CPU and the chipset only communicates over HyperTransport, one can pretty much mix'n'match most chipsets almost regardless of the processor generation).
    And these mainboards are targeted at the budget segment (they usually feature only a couple of slots, and sometimes integrated graphics).

    All the high-end boards are nvidia- or ati-based.
    The ATI ones are especially popular in research because they provide 4 long PCIe slots (16x physical, usually 8x bandwidth when all 4 are in use), often in alternating succession (one PCIe 16x every other slot), enabling scientists to put in 4 dual-slot cards for GPGPU (CUDA or Brook).

    I'm not really seeing a lot of boards on the market (especially carried in stores) that use the AMD chipset.

    I don't know; maybe the few stores you checked either carry only old (pre-Phenom) motherboards or sell more nvidia-based ones because they are popular thanks to SLI support.

    But most on-line shops I use have both nvidia- and ati-based motherboards.

  • by CajunArson ( 465943 ) on Sunday August 24, 2008 @08:59PM (#24731453) Journal

    Yeah... good luck getting OSS ATI drivers that actually drive their newest chips any time soon, specs or no specs. The difference is that Intel is literally making a specialized version of x86 that is massively vectorized for doing stream processing, with graphics merely being the most common task that such a processor would be built for on a desktop. The differences in programmability will be pretty massive, and "drivers" in the traditional sense might not even apply... the chip could literally be treated like a specialized CPU.

    While ATI & Nvidia are probably correct that Larrabee will not beat their chips in 2010, the difference is that Intel is designing a chip that will forever alter how OSS & Linux systems operate when it comes to graphics... forget about begging for specs for some bizarre and bug-riddled chip (GPUs routinely ship with errata that would force a CPU maker into massive recalls); Larrabee will make general-purpose graphics computing a reality. Intel may be doing more for graphics on Linux than any other company in history, even though it is probably not Intel's direct intent to merely help Linux.

  • by innocent_white_lamb ( 151825 ) on Sunday August 24, 2008 @11:09PM (#24732317)

    Hopefully ATI hardware is, and this time around ATI/AMD's open source commitment is sincere.
     
    Hear hear.
     
    I would like nothing better than to be able to recommend ATI hardware as an alternative to Intel on Linux boxes. The more choices there are the better it will be.
     
    Unfortunately, as I said, Intel seems to be the only game in town right at this moment.

  • Re:Doh of the Day (Score:1, Interesting)

    by Anonymous Coward on Monday August 25, 2008 @01:30AM (#24733147)

    Intel does understand the memory bandwidth/latency issues. Read the Larrabee Siggraph paper:

    http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf

  • by Sj0 ( 472011 ) on Monday August 25, 2008 @09:30AM (#24735929) Journal

    I'm fixated on what engineers use their computers for.

    I design things all day, and all I've got, all I need, is an ancient Intel 865 video chipset built into the motherboard of my Dell Optiplex.

    I don't want or need a GPU, neither does anyone else in our department.

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...