Intel Reveals the Future of the CPU-GPU War
Arun Demeure writes "Beyond3D has once again obtained new information on Intel's plans to compete against NVIDIA and AMD's graphics processors, in what the Chief Architect of the project presents as a 'battle for control of the computing platform.' He describes a new computing architecture based on the many-core paradigm with super-wide execution units, and the reasoning behind some of the design choices. Looks like computer scientists and software programmers everywhere will have to adapt to these new concepts, as there will be no silver bullet to achieve high efficiency on new and exotic architectures."
Great! (Score:3, Informative)
The CPU wars have finally gotten interesting again. I'm going to go grab some popcorn.
Re:Great! (Score:4, Interesting)
Re: (Score:3, Funny)
Re:Great! (Score:5, Interesting)
Easiest way to make sure a product doesn't meet expectations is to raise expectations.
Great expectations! (Score:2, Funny)
Sex with geeks is great!
Ken Kutaragi thinks Sony is the devil? (Score:4, Informative)
No, mostly, it was Ken Kutaragi.
Having an extensive history of reporting on Sony, I'm sure you remember he did the exact same thing when hyping the PS2's Emotion Engine.
Re: (Score:2, Interesting)
Re:Great! (Score:5, Informative)
It's very expensive to have DSP code written, compared to normal CPU code, and video game manufacturers have been complaining that the cost of making a game is too high. Also, most of the complexity in a video game nowadays is handled by the GPU, not the CPU. Now, the Cell would be great for lots of parallel signal processing or some other similar task, and I bet it could be used to create a great video game; it would just be prohibitively expensive.
The Cell is a great solution to a problem. However, that problem isn't video games. A fast traditional CPU, possibly with multiple cores, attached to a massively pipelined GPU would probably work better for video games.
Re: (Score:2)
The Cell's advantage lies in having one and only one kind of SPU (or über-DSP) in the architecture, instead of the myriad of different GPUs on the market.
That's an advantage both Intel and AMD will try to get by making the GPU instruction set a standard, much like the 80486 incorporated the 80387 instruction set in a single standard.
Re: (Score:2, Informative)
All a DSP does is do lots of arithmetic operations and memory moves quickly. For example, in a single instruction, the DSP could run an addition and shift
Re: (Score:2)
Since Sony announced the oddity last June (verbatim: "~16MB/s (no, this is not a typo)"), I presume it means all PS3 Cell CPUs past and present will carry it on... or at the very least, game developers will not be allowed to use the extra bandwidth should DMA reads be fixed to maintain compatibility with first-gen PS3s.
Re: (Score:3, Insightful)
Except when it ain't. Lemme see, entire new programming model... haven't we heard this song before? Something about HP & Intel going down on the Itanic? OK, Intel survived the experience, but HP is pretty much out of the processor game and barely hanging in otherwise.
Yes, it would be great if we could finally escape the legacy baggage of x86, but it ain't going to happen.
Re: (Score:2)
Re: (Score:2)
I think some programmers have suggested that perhaps, as a possibility, changing the programming model to something far more difficult to exploit was not the smartest choice in a competitive console market.
But I don't think anyone really 'cried' about it. They just chose not to invest in it, yet.
Even the skeptics seemed to think Cell was pretty cool by itself (me included).
What people 'cried about' was the price tag, the misleading trailers / screenshots, the
Re: (Score:3, Funny)
yay (Score:2, Interesting)
Re: (Score:3, Informative)
Schwab
Re:yay (Score:5, Insightful)
So you can bash Intel graphics all you like, but for F/OSS users they could end up as the only game in town. We're not usually playing the latest first-person shooters; performance only needs to be "good enough".
Re: (Score:2)
Re: (Score:2)
As for Intel graphics on notebooks, I agree: there is nothing like having a component in a notebook where you don't have to worry about it being some bizarre, non-standard, randomly hacked (or firmware-crippled) one, especially when you are
Re: (Score:2)
Re: (Score:2)
This isn't a graphics processor issue - even the crappiest embedded graphics cards that show up in mainstream consumer PCs can easily handle modes like 1600x1200. That's been true for years now.
The problem is twofold: Display technology, and user acceptance of crappy displays. This has just been made worse by the transition to LCD displays - that probably set us back 4 years on screen resolution gains by itself.
Re: (Score:3)
Sure there is (Score:5, Insightful)
Re:Sure there is (Score:5, Insightful)
This kind of pisses me off. People who are functional programming enthusiasts are always telling other people that they should be using functional languages, but they never write anything significant in those languages.
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
In the meantime, you were given a nice example -- ATC systems and telephone exchanges -- and you can have a research paper about map reduce [216.239.37.132]. If you don't believe me, believe the peer-reviewed research.
Re: (Score:2, Insightful)
Re: (Score:2)
BOINC [berkeley.edu]?
Re: (Score:2)
Re:Sure there is (Score:4, Informative)
Re: (Score:2)
And just how many open source applications that rely on parallelisation (and we won't count any kind of parallelisation where there's mostly no coordination required between threads/processes, such as Apache) have been written recently?
Re: (Score:2)
Look, I don't think I'm asking something too unreasonable here. If functional languages are so much easier to write multithreaded apps using, then show me. Either point at something that already exists which demonstrates this claim or write something new that demonstrates this claim.
The claim has been made, back it up.
Re: (Score:2)
By limiting to open source only, I think you are.
Let's take the classic example of a difficult multi-threading problem: a DBMS. It's not completely separable, like an FTP server, and it's not effectively serialised, like an X11 server.
Have people written highly-threaded DBMSes in functional languages? Sure. [erlang.org] Are they open source? Probably not. There are a few decent open source DBMSes, so there's really no itch to scratch. (So why was Mn
Re: (Score:2)
Re: (Score:2)
First off, I didn't make the original claim.
Secondly, I agree with you that if you can't see the source code, it's not science.
But thirdly, I still think you're being unfair. You don't need to see the plans for a Boeing aircraft and a Tupole
Re: (Score:2)
Re: (Score:2)
Seriously though, looking at the Debian compiler shootout, ocaml does fairly well at CPU intensive applications. In the past I've seen some applications claimed to be written in it (unison), but I can't find a decent list of them, and I don't remember any being parallel / distributed. Yahoo! stores was apparently written in Lisp, and Paul Graham won't shut up about it.
But the GGP is talking about extending such languages, already sparsely in use, to distributed computations. Erlang is the p
Re: (Score:2)
Last I checked they were running software to compute aerodynamic loads on space lift boosters on the cluster. It's one of those jobs that just runs for days and weeks even on racks of dual CPU linux boxes. It was an optimization se
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:2, Interesting)
Of course Google'
Re: (Score:2)
The bottom line is that until we change architecture, we have to translate our fu
Re: (Score:3, Insightful)
Again, no one disagrees with your idea that writing in a functional style is a good idea for parallel programming, but the OP said that we should give up on two specific languages and pick up a functional one. Clearly a program in a functional style can be written in C++ (which is a superset of C89, more or less, which is one of the
Re: (Score:2)
Re: (Score:2)
I (as someone who works at a supercomputing center) am still waiting for a parallel weather forecasting code or hypersonic aerothermodynamics code or the like written in one of the many functional languages touted here. I deal with such codes on a daily basis, and they're all written in Fortran, C, and C++ (in that (decreasing) order of occurrence). These are th
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Would the telephone switching systems that Erlang was made for the express purpose of implementing count, or the air traffic control systems also implemented with it, or would it have to be something more of a significant application than that?
If games are more your thing, how about this [lambda-the-ultimate.org].
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Jane Street Capital [janestcapital.com].
A high performance Erlang webserver [hyber.org].
Re: (Score:2)
I'm actually a fan of functional languages... insofar as they are part of formal software development.
Re: (Score:2)
I think that's true of all people who claim to have a silver bullet. For example, I'll be impressed when the "eXtreme Programming 100% test coverage, no checkins without a new test passing" crowd actually manage to write a kernel like that, including tests demonstrating complex race conditions
Re:Sure there is (Score:5, Insightful)
Extra credit if you can do transaction-level device control over USB.
Schwab
Re: (Score:2)
Re:Sure there is (Score:5, Insightful)
Frag [haskell.org] which was done for an undergrad dissertation using yampa and haskell to make an FPS.
Haskell Doom [cin.ufpe.br] which is pretty obvious.
A few more examples [cin.ufpe.br].
I dunno if that satisfies your requirements or not. Though I don't quite get how this is relevant to the GP's post. This seems like more of a gripe with Haskell than anything. But if I've missed something, please elaborate.
Re: (Score:3, Insightful)
Re:Sure there is (Score:5, Insightful)
I've done some rudimentary reading on functional programming languages -- mostly Haskell and LISP (which is sorta FP) -- and I believe you when you cite all the claimed benefits. The architecture of the languages certainly enables it.
However, every time I've tried to get a handle on Haskell, all the examples presented tend to be abstract. In other words, they contrive a problem that Haskell is fairly well-suited to solving, and then write a solution in Haskell, using data structures and representations entirely internal to Haskell. "Poof! Elegance!" Well, um...
I'm a gaming, graphics, and device driver geek, and so my explorations of new stuff tend to lean heavily in that direction. I'm interested in more "concrete" expressions of software operation. Could Haskell offer new or interesting possibilities in network packet filtering? Perhaps, but first you have to read reams of text on how to bludgeon the language into reading and writing raw bits.
The other issue with FP languages is that they tend to treat all problems as a collection of simultaneous equations -- things that can be evaluated at any time, in any order. There's a huge class of computing problems that can't be described that way. You can't unprint a page on the line printer. There are facilities for sequencing/synchronizing operations (Haskell's monads, for instance), but I get the impression that FP's elegance starts to fall apart when you start using them.
Understand that my exposure to FP in general and Haskell in particular is less than perfunctory, and I am very likely misunderstanding a great deal. I'd like to learn and understand more about FP, but so far I haven't encountered the "Ah-hah!" example yet.
Schwab
It's an old argument (Score:4, Insightful)
Back in the '70s, people like Jack Dennis used to promise DARPA that they could parallelize the old Fortran code used to do complex military simulations by converting it to a pure functional language. It would be wonderful! Well, they couldn't, and it wasn't.
The above notwithstanding, IF you can coerce a problem into a form in which a functional language can be effectively employed, the benefits can be huge. The code tends to be more elegant and more readable; algorithms that would be difficult to write in an applicative language like C become easy; data structure manipulation is trivial; and so on. Arguments that functional languages are "slow" have been debunked. Arguments that functional languages must be interpreted are wrong.
And all the syntactic nonsense of C++ and the rest of the "object oriented" languages can be (mercifully) shed. Pure functional languages are object oriented by nature. However, functional languages do have their own idiosyncrasies, such as the infamous Lisp "quote" and implementation-dependent funarg problems. So there are cobwebs still.
To sum up: If you have a hard algorithmic problem to solve, a functional language will probably be a better choice, even if you end up re-coding the algorithm in an applicative language later. If you have a device driver to write, though, roll up your sleeves and get out the C manual. But first: make sure to put a debug wrapper around your mallocs (and pad your malloc blocks with patterns on both sides) so you can trap double-frees, underwrites, and overwrites. It will pay many dividends.
Re: (Score:2)
This is what lazy evaluation is for. You actually can describe printing a page functionally. What'll happen, internally, is that the topmost print function won't terminate until it has satisfied all its "dependencies". Just
Not quite (Score:2)
You do have a point. I wrote the Concurrency chapter in The Java Tutorial (yeah, yeah, it wasn't my idea to assign a bunch of tech writers to write what's essentially a CS textbook), struggled for 15 pages just to discuss the basics of the topic, and ran out of time to cover more than half of what I sho
Re: (Score:3, Interesting)
You do have a point. I wrote the Concurrency chapter in The Java Tutorial, struggled for 15 pages just to discuss the basics of the topic, and ran out of time to cover more than half of what I should have. Most of what I wrote was about keeping your threads consistent, statewise.
Ideally I would like to see Java take up something like this [inf.ethz.ch] as a means to handle concurrency. It is simple, easy to understand, easy to reason about concurrent code, and doesn't require stepping very far outside standard stateful OO programming methods. Whether that will actually happen is, of course, another question, but something along those lines does offer a good way forward.
Re: (Score:2)
An "abstract math gene" probably isn't the issue. It's just an issue of practice - most good programmers today have many years of experience working with object oriented and procedural programming languages. Functional programming is different, and it takes practice to get used to - years of practice to be as comfortable as what you're used to.
Further, a pure functional system is utterly useless since IO is a side effect. This doesn't change the fact
Re: (Score:2)
Abandon C and Fortran. Functional programming makes multithreading easy, and programs can be written for parallel execution with ease. And as an added benefit: goodbye buffer overflows and double frees!
Functional languages seem to regularly get trotted out when the subject of multi-cores and multi-threading comes up, but it really isn't a solution -- or, at least, it isn't a solution in and of itself. If you program in a completely pure functional manner with no state then, yes, things can be parallelised. The reality is that isn't really a viable option for a lot of applications. State is important. Functional languages realise this, and have ways to cope. With the ML family you get a little less purity
Re: (Score:3, Interesting)
Re: (Score:2)
Anyway, you seriously expect someone to click on a TINYURL link in a Slashdot sig?
Re: (Score:3, Insightful)
Re: (Score:2, Interesting)
When people write games, they do all kinds of crazy stunts to ensure they have as few multiplications as possible. Can you really trust a compiler to get the code right for that tight inner loop? Figuring out parallelism might be hard, but game programming has always been hard.
Also, you avoided mentioning memory. It doesn't matter if Haskell uses marginally less memory if it's in the wrong place when you need it. Is that texture in RA
Re: (Score:2)
What makes you think a compiler will be able to do it better than a human?
Guess you missed the mention of "ultra-wide execution units" in the summary. Think Itanium, think of scheduling in terms of "bundles" of simultaneous instructions, ponder how to group the instructions in your multiple threads of execution so that if one thread branches, the code for the other threads is in the same bundle as the code the one thread branches to. I'm sure these chips will run fast, because they won't have to worry about coordinating "in flight" instructions, or keeping register scoreboards
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:Sure there is (Score:4, Interesting)
Re: (Score:3, Interesting)
Re:Sure there is (Score:4, Interesting)
Cell (Score:3, Insightful)
Re: (Score:2)
IBM's Cell processor would be a lot more useful in general if it were slightly modified: replace one or two aux cores with GPU cores, use 2-4 as more general-purpose cores, and add one or two FPGA-style cores, possibly with preset configuration options (e.g. audio processing, physics, video, etc.).
Complicated, yes. But can you imagine what one could do? Your single computer could be switched on the fly to encode an audio and video stream far
Astroturf (Score:5, Insightful)
Arun Demeure writes "Beyond3D has once again obtained new information...
If you are going to submit your own articles [beyond3d.com] to Slashdot, at least have the decency to admit this instead of talking about yourself in the third-person.
Re: (Score:2)
future of computing? (Score:2, Interesting)
We need a new architecture (Score:5, Interesting)
It can't be argued that x86 is the best architecture ever made; we all know it's not. But it is the one with the most research behind it. We need the top companies in the industry (Intel, AMD, MS, etc.) to sit down and design an entirely new specification going forward.
New processor architecture, a new form factor, a new power supply, etc...
Google has demonstrated that a single-voltage PSU is more efficient and completely doable. There is little reason that we still use internal cards to add functionality to our systems; couldn't these be more like cartridges, so you don't need to open the case?
Why not do away with most of the legacy technology in one swoop and update the entire industry to a new standard?
P.S. I know why: money. There's too much investment in the old to be worth creating the new. But I can dream, can't I?
Re:We need a new architecture (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I agree. However, one should design a new CPU architecture based on a software model, not the other way around, as was done with the Cell CPU.
P.S. I know why: money. There's too much investment in the old to be worth creating the new. But I can dream, can't I?
Yes you can dream. But unless the new architecture is going to solve a big problem in the industry, it's not worth it. The biggest problem in the comp
Similar to what Sun is doing with Niagara (Score:4, Interesting)
The first Niagara CPUs were terrible at floating-point math, so they were only good for web servers. The next generation, I hear, is supposed to be better at FPU ops.
'battle for control of the computing platform' (Score:2)
They will just not be supported... (Score:2)
Intel against NVIDIA/ATI/AMD? OSS? (Score:5, Insightful)
NVIDIA's Linux drivers are pretty good, but ATI/AMD's are god-awful, and both NVIDIA's and ATI/AMD's are much more difficult to use than Intel's.
I'd love to see an Intel GPU/CPU platform that was performance competitive with ATI/AMD or NVIDIA's offerings.
Re: (Score:2)
Yaaaay! (Score:2)
Good for linux (Score:3, Insightful)
News at 11: Sometimes specialized hardware is fast (Score:2)
Re: (Score:2)
Re: (Score:2)
Very insightful of me, eh?
As to it being the dumbest thing ever... again... a joke, OK? Sheeze.