Nvidia Claims Intel's Larrabee Is "a GPU From 2006"
Barence sends this excerpt from PC Pro:
"Nvidia has delivered a scathing criticism of Intel's Larrabee, dismissing the multi-core CPU/GPU as wishful thinking — while admitting it needs to catch up with AMD's current Radeon graphics cards. 'Intel is not a stupid company,' conceded John Mottram, chief architect for the company's GT200 core. 'They've put a lot of people behind this, so clearly they believe it's viable. But the products on our roadmap are competitive to this thing as they've painted it. And the reality is going to fall short of the optimistic way they've painted it. As [blogger and CPU architect] Peter Glaskowsky said, the "large" Larrabee in 2010 will have roughly the same performance as a 2006 GPU from Nvidia or ATI.' Speaking ahead of the opening of the annual NVISION expo on Monday, he also admitted Nvidia 'underestimated ATI with respect to their product.'"
Gee, How "Forward Thinking" of You, NVidia! (Score:5, Interesting)
"OH MY GOD! CPU AND GPU ON ONE DIE IS STOOOOOOOOPIIIIIDDDDDEDEDDDD!!!1111oneoneone"
How stupid is it really? So what if the average consumer knows very little about their PC? That doesn't mean an integrated CPU/GPU won't end up in their machine.
If Nvidia were really forward-thinking, they would see this as an effort to bridge the gap between low-end and high-end PCs. Maybe, at some point in the future, people will be able to game a little better on those low-end machines.
Games that are nigh unplayable today might then run noticeably more smoothly, and as the design advances it could work out even better.
Sure, for the time being, I don't doubt that the obvious choice for gaming is a discrete component solution. However, there might come a point where that isn't in gamers' best interests anymore. I'm not a soothsayer; I don't know.
Still, I can't help but imagine that Intel's and AMD's ideas will only help everyone as a whole.
AMD is in the Best Position (Score:5, Interesting)
Lots of people here, and plenty of analysts, have written off AMD. I think AMD is in a great position if they can survive their short-term debt problems, which looks increasingly likely.
Consider the following:
AMD is in a better position than any other company to capitalize on the coming CPU/GPU convergence. Everyone jeered when AMD bought ATI, but it is looking like a great strategic move if they can execute on their strategy.
AMD has the best mix of technology; they just have to put it to good use.
What bullshit. (Score:5, Interesting)
From the SIGGRAPH paper, they need something like 25 cores to run Gears of War at 60 Hz. But those are 1 GHz cores, used for comparison purposes. LRB will probably run at something like 3 GHz, meaning you'd only need around 8-9 cores to run GoW at 60, and with the paper's benchmarks scaling up to 48 cores you can see that this has the potential to be very fast indeed.
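As a back-of-envelope check on that claim (the 25-core figure is from the paper; the 3 GHz shipping clock is pure speculation, and real scaling would not be perfectly linear):

```python
# Cores needed to hold 60 Hz scale roughly inversely with clock speed,
# assuming (optimistically) perfect linear scaling.
cores_at_1ghz = 25        # Gears of War at 60 Hz, per the SIGGRAPH paper
target_clock_ghz = 3.0    # speculative shipping clock, NOT a confirmed spec

cores_needed = cores_at_1ghz / target_clock_ghz
print(f"~{cores_needed:.1f} cores at {target_clock_ghz} GHz")  # ~8.3 cores at 3.0 GHz
```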
More importantly, the LRB has much better utilization, since there aren't any fixed-function divisions in the hardware. For example, most of the time you're not using the blend units, so why dedicate all that floating-point hardware to blending when 99% of the time it sits unused? On LRB everything is utilized all the time. Blending, interpolation, stencil/alpha testing, etc. are all done using the same functionality, meaning that when you turn something off (like blending) you get better performance rather than just leaving parts of your chip idle.
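A toy sketch of that utilization argument (the function and its signature are invented for illustration; a real software rasterizer is vastly more involved): in a software pipeline, a disabled stage simply never executes, so its cost drops to zero instead of leaving a fixed-function block idle.

```python
def shade_pixel(src, dst, blending_enabled):
    """Toy back-end stage: src and dst are RGBA tuples with components in [0, 1]."""
    if not blending_enabled:
        return src  # stage skipped entirely: zero cost, no idle hardware
    # Alpha blend (src over dst) runs on the same general-purpose FP units
    # every other stage uses -- nothing is reserved exclusively for blending.
    a = src[3]
    return tuple(s * a + d * (1 - a) for s, d in zip(src[:3], dst[:3])) + (1.0,)
```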
I'd also like to point out that having a software pipeline means faster iteration, meaning that they have a huge opportunity to simply out-optimize Nvidia and AMD, even for the D3D/OGL pipelines.
Furthermore, imagine Intel supplying half a dozen "profiles" for their pipeline, each optimized for a different scenario (e.g. deferred rendering, shadow-volume-heavy rendering, etc.). The user could then try each profile with their games and run each game with the one that suits it best. More importantly, though, a new game could just spend 30 minutes figuring out which profile suits it best, set a flag in the registry somewhere, and automatically get a big boost on LRB cards. That's a tiny amount of work for LRB-specific performance wins.
The next step in LRB-specific optimizations is to allow developers to set up an LRB config file for their title, with lots of variables and tuning knobs (remember that LRB uses a JIT-compiled inner loop that combines the setup, tests, pixel shader, etc.). This would again be a very simple thing to do (and Intel would probably do it for you if your title is high-profile enough), and could potentially give you a massive win.
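The profile-picking step could be as simple as this sketch (the profile names and the benchmark interface are entirely made up; nothing like this is confirmed for LRB):

```python
# Invented profile names, purely for illustration.
PROFILES = ["default", "deferred", "shadow_volume_heavy", "alpha_heavy"]

def pick_profile(frame_time_ms):
    """frame_time_ms: callable mapping a profile name to an average frame time (ms)."""
    timings = {p: frame_time_ms(p) for p in PROFILES}
    return min(timings, key=timings.get)   # lowest average frame time wins

# A title with a deferred-shading-heavy renderer might measure:
measured = {"default": 21.0, "deferred": 12.5,
            "shadow_volume_heavy": 18.0, "alpha_heavy": 20.0}
print(pick_profile(measured.get))          # deferred
```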
And then of course the next step after that is LRB-specific code, i.e. you write stuff outside D3D/OGL to leverage the LRB directly. This probably won't happen for many games, but you only need to convince Tim Sweeney and Carmack to do it, and then most of the high-profile games will benefit automatically (through licensing). My guess is that you won't need to do much convincing. I'm a graphics programmer myself and I'm gagging to get my hands on one of these chips! If/when we do, I'll be at work on weekends and holidays coding up cool tech for it. I'd be surprised if Sweeney and Carmack aren't the same.
I think LRB can be plenty competitive with Nvidia and AMD using the standard pipelines, and there's a very appealing low-friction path for developers to leverage the LRB specifically, with varying degrees of effort.
Re:Better than NVIDIA's proprietary hardware (Score:4, Interesting)
The fix would be trivial (just recompile the current version), but Nvidia clearly would rather sell me a new card.
Re:AMD is in the Best Position (Score:5, Interesting)
Well, if you read the reviews, AMD's integrated graphics solution, the 780G, kicks ass. Only the very newest Intel integrated chipset is slightly better, but it uses around 20 W compared to about 1 W for AMD's chipset.
Re:AMD is in the Best Position (Score:3, Interesting)
Just looking at this from a manufacturing side:
AMD is roughly two years behind Intel in semiconductor process technology. Due to this and other factors (SOI, R&D/SG&A relative to revenue), they are in a very bad cost position. Even if they have a better design, Intel can easily offset this with pure manufacturing power.
The playground is more level for Nvidia vs. ATI since both rely on foundries.
It's tough to tell whether ATI/AMD will be able to capitalize on this situation. They are very lucky to have a new opportunity; otherwise they would be toast.
Two things are for certain: Nvidia is getting into rougher waters soon and Intel will not give up on this one easily.
Re:AMD is in the Best Position (Score:5, Interesting)
Nvidia doesn't have x86 design/manufacturing experience, an x86 license, or even any x86 technology they want to announce
True. They do, however, have an ARM Cortex A8 system on chip, sporting up to four Cortex cores and an nVidia GPU in a package that consumes under 1W (down to under 250mW for the low-end parts). Considering the fact that the ARM market is currently an order of magnitude bigger than the x86 market and growing around two-three times faster, I'd say they're in a pretty good position.
Re:AMD is in the Best Position (Score:5, Interesting)
I'd prefer a stable release from 2006... (Score:5, Interesting)
Re:Intel isn't aiming at gamers (Score:3, Interesting)
Re:Intel isn't aiming at gamers (Score:4, Interesting)
I wouldn't call it that.
I'd call it a knee-jerk reaction to a non-issue.
Nvidia are getting very scared now that ATi are beating them senseless. I run both ATi and Nvidia, so don't go down the "you're just a fanboy" angle either.
I've seen chip makers come and go; this is just another attempt by Nvidia to shore up support for their product. But this time they can't turn to ATi and say "look how crap their chips are" - they have to say it about Intel, who are aiming these chips at corporate markets.
To be honest, the best bang for the buck at the lower end of the market for 2D seems to be the Intel chips. One thing that does tend to surprise people is the complete lack of performance of the Nvidia chipsets when not in 3D. ATi don't seem to have these problems, having built on a solid base of 2D graphics engines from the '90s (the Rage/Rage II is at least one reason people went with Macs back then). Nvidia is really feeling the pinch, with ATi taking the higher end of the market (pro gear/high-end HD) and Intel shoring up the lower end (GMA, etc.). Nvidia is pretty much stuck with consumers buying their middle-of-the-line gear (8600/9600).
When you aim high, it really hurts when you fall from grace; the whole 8800-to-9800 leap was abysmal at best, unlike their main competitor, who really pulled their finger out to release the 3xxx and 4xxx series.
All in all, this seems like a bit of pork barrelling on Nvidia's part to distract from the complete lack of performance in their $1000 video card range. If anything, this type of bullshit will be rewarded with a massive consumer (yes, geek and gamer) backlash.
I know my products and I know their limitations - I don't need some exec talking crap to tell me, and base-level consumers will never read it.
From 2006? (Score:2, Interesting)
Comment removed (Score:5, Interesting)
Re:AMD is in the Best Position (Score:4, Interesting)
True. They do, however, have an ARM Cortex A8 system on chip, sporting up to four Cortex cores and an nVidia GPU in a package that consumes under 1W (down to under 250mW for the low-end parts). Considering the fact that the ARM market is currently an order of magnitude bigger than the x86 market and growing around two-three times faster, I'd say they're in a pretty good position.
True, but the ARM market also has many more players. ARM will license their core to anyone, so you have Intel VS AMD VS VIA in x86 land and TI vs Philips (NXP now -- I LOVE this chip) VS Marvell (not a licensee, but they have a crummy chip for free) VS NVIDIA? VS Analog (that's kind of funny) VS IBM VS Fujitsu VS Freescale VS STM VS Cirrus VS Atmel VS Broadcom VS Nintendo VS Sharp VS Samsung VS ... VS there's probably even Xilinx in there for good measure.
So, the market is larger, but the competition is stiffer.
That said, if they made an EEE like machine with NVidia's graphics and 4x cortex cores, I'd buy one.
Re:AMD is in the Best Position (Score:2, Interesting)
Lots of people here, and plenty of analysts, have written off AMD. I think AMD is in a great position if they can survive their short-term debt problems, which looks increasingly likely.
Everyone forgets that AMD purchased ATI not for their GPU line (that was a bonus) but for their north/south bridge chipsets, which finally let AMD provide a complete platform the way Intel does. For example, Asus only designs the PCB used in their motherboards, pairing either an Intel chipset (Socket 775/ICH9/GMA 3100) or an AMD socket (AM2) with either an ATI northbridge or an Nvidia nForce chipset. The only thing AMD got from such a sale was the socket and the CPU, while ATI or Nvidia got the rest of the business. Not a good deal compared to Intel, who gets everything but the PCB motherboard business.
Remember who gets the blame if something's wrong with the board? Even if it's actually Intel's fault, Asus takes the hit, not Intel. That is just one of the advantages of producing not only the CPU but the entire chipset, which is exactly what AMD wants.
Now that AMD is finally digesting the meal called ATI, we're beginning to see the advantages that were the reason for the purchase: improved energy efficiency and performance from AMD-branded motherboards, along with the profits. We're also going to begin seeing the real open source support from ATI GPUs that everyone wants yet has been lacking to date (docs released to the devs under NDAs). In fact, I can see ATI doing the same as Nvidia but in reverse: open source drivers, with a Windows-only binary blob for their video cards.
Re:What bullshit. (Score:4, Interesting)
Some notes from Tim Sweeney in a discussion on this:
"Note that the quoted core counts for AMD and NVIDIA are misleading.
A GPU vendor quoting "240 cores" is actually referring to a 15-core chip, with each core supporting 16-wide vectors (15*16=240). This would be roughly comparable to a 15-core Larrabee chip.
Also keep in mind, a game engine need not use an architecture such as this heterogeneously. A cleaner implementation approach would be to compile and run 100% of the codebase on the GPU, treating the CPU solely as an I/O controller. Then the programming model is homogeneous, cache-coherent, and straightforward.
Given that GPUs in the 2009 timeframe will have multiple TFLOPs of computing power, versus under 100 GFLOPS for the CPU, there's little to lose by underutilizing the CPU.
If Larrabee-like functionality eventually migrates onto the main CPU, then you're back to being purely homogeneous, with no computing power wasted.
I agree that a homogeneous architecture is not just ideal, but a prerequisite to most developers adopting large-scale parallel programming.
In consumer software, games are likely the only applications whose developers are hardcore enough to even contemplate a heterogeneous model. And even then, the programming model is sufficiently tricky that the non-homogeneous components will be underutilized.
The big lesson we can learn from GPUs is that a powerful, wide vector engine can boost the performance of many parallel applications dramatically. This adds a whole new dimension to the performance equation: it's now a function of Cores * Clock Rate * Vector Width.
For the past decade, this point has been obscured by the underperformance of SIMD vector extensions like SSE and Altivec. In those cases the basic idea was sound, but the resulting vector model wasn't a win because it was far too narrow and lacked the essential scatter/gather vector memory addressing instructions.
All of this shows there's a compelling case for Intel and AMD to put Larrabee-like vector units in future mainstream CPUs, gaining 16x more performance on data-parallel code very economically.
Tim Sweeney
Epic Games"
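Sweeney's Cores * Clock Rate * Vector Width relation, and the "240 cores" arithmetic, are trivial to write down (the 1.5 GHz clock below is a made-up example figure, not any real part's spec):

```python
def vector_throughput(cores, clock_ghz, vector_width):
    # Billions of single-precision lane-operations per second,
    # ignoring FMA and any other per-lane multipliers.
    return cores * clock_ghz * vector_width

print(15 * 16)                          # 240: the "240 cores" marketing figure
print(vector_throughput(15, 1.5, 16))   # 360.0 G-ops/s for a notional 1.5 GHz part
```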
Re:From 2006? (Score:3, Interesting)
Re:AMD is in the Best Position (Score:2, Interesting)
The 386SX clone is from their acquisition of ULi, formerly a part of Acer. Also, their 3dfx x86 licenses might not be much help, even if they haven't expired, because modern x86 is quite different with all the extensions like x64.
Re:Intel isn't aiming at gamers (Score:4, Interesting)
Re:Intel isn't aiming at gamers (Score:2, Interesting)
Intel has released comprehensive driver docs for a long time, and their driver still sucks.
It's my understanding that the current performance (or lack thereof) of Intel graphic chipsets on Linux is due merely to the capabilities of the graphic chipsets themselves, not the driver.
I always try to get Intel graphic chipsets on every computer that I buy. I don't do gaming and I do Linux exclusively, so Intel is the answer to "what graphic chipset should I get" at the moment.
It's dead simple to make it work and it's adequate for what I do.
Phenom landscape is different (Score:3, Interesting)
That was the case back in the early K8 days, when it seemed like 99% of the boards on the market for AMD CPUs had Nvidia/VIA chipsets.
But since Phenom and the AM2+ socket appeared, 99% of the boards on the market for these use an Nvidia or ATI chipset.
The few VIA-based motherboards you can still find are usually based on a derivative of the KT800 chipset that was already available back in the early K8 days (since the memory controller is on the CPU and the chipset communicates only over HyperTransport, one can pretty much mix'n'match most chipsets almost regardless of the processor generation).
And those mainboards are targeted at the budget segment (they usually feature only a couple of slots, and sometimes integrated graphics).
All the high-end boards are nvidia or ati based.
The ATI ones are especially popular in research because they provide four long PCIe slots (16x physical, usually 8x bandwidth when all four are in use), often in alternating succession (one 16x PCIe slot every two slots), enabling scientists to fit four dual-slot cards for GPGPU (CUDA or Brook).
I'm not really seeing a lot of boards on the market (especially carried in stores) that use the AMD chipset.
I don't know; maybe the few stores you checked either carry only old (pre-Phenom) motherboards, or sell mostly Nvidia-based boards because those are popular for their SLI support.
But most online shops I use carry both Nvidia- and ATI-based motherboards.
Re:Intel isn't aiming at gamers (Score:2, Interesting)
Yeah... good luck getting OSS ATI drivers that actually drive their newest chips any time soon, specs or no specs. The difference is that Intel is literally making a specialized version of x86 that is massively vectorized for stream processing, with graphics merely being the most common task such a processor would be built for on a desktop. The differences in programmability will be pretty massive, and "drivers" in the traditional sense might not even apply... the chip could literally be treated like a specialized CPU.
While ATI and Nvidia are probably correct that Larrabee will not beat their chips in 2010, the difference is that Intel is designing a chip that will forever alter how OSS and Linux systems handle graphics. Forget about begging for specs to some bizarre and bug-riddled chip (GPUs routinely ship with errata that would force a CPU maker into massive recalls); Larrabee will make general-purpose graphics computing a reality. Intel may be doing more for graphics on Linux than any other company in history, even though helping Linux is probably not Intel's direct intent.
Re:Intel isn't aiming at gamers (Score:2, Interesting)
Hopefully ATI hardware is, and this time around ATI/AMD's open source commitment is sincere.
Hear hear.
I would like nothing better than to be able to recommend ATI hardware as an alternative to Intel on Linux boxes. The more choices there are the better it will be.
Unfortunately, as I said, Intel seems to be the only game in town right at this moment.
Re:Doh of the Day (Score:1, Interesting)
Intel does understand the memory bandwidth/latency issues. Read the Larrabee Siggraph paper:
http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf
Re:Artists, engineers, scientists... (Score:3, Interesting)
I'm fixated on what engineers use their computers for.
I design things all day, and all I've got, all I need, is an ancient Intel 865 video chipset built into the motherboard of my Dell Optiplex.
I don't want or need a GPU, neither does anyone else in our department.