
Intel Reveals the Future of the CPU-GPU War 231

Arun Demeure writes "Beyond3D has once again obtained new information on Intel's plans to compete against NVIDIA and AMD's graphics processors, in what the Chief Architect of the project presents as a 'battle for control of the computing platform.' He describes a new computing architecture based on the many-core paradigm with super-wide execution units, and the reasoning behind some of the design choices. Looks like computer scientists and software programmers everywhere will have to adapt to these new concepts, as there will be no silver bullet to achieve high efficiency on new and exotic architectures."
This discussion has been archived. No new comments can be posted.


  • Sure there is (Score:5, Insightful)

    by Watson Ladd ( 955755 ) on Wednesday April 11, 2007 @07:17PM (#18696319)
    Abandon C and Fortran. Functional programming makes multithreading easy, and programs can be written for parallel execution with ease. And as an added benefit: goodbye buffer overflows and double frees!
  • Cell (Score:3, Insightful)

    by Gary W. Longsine ( 124661 ) on Wednesday April 11, 2007 @07:20PM (#18696343) Homepage Journal
    The direction looks similar to the direction the IBM Power-based Cell architecture is going.
  • Astroturf (Score:5, Insightful)

    by Anonymous Coward on Wednesday April 11, 2007 @07:22PM (#18696361)

    Arun Demeure writes "Beyond3D has once again obtained new information...

    If you are going to submit your own articles [beyond3d.com] to Slashdot, at least have the decency to admit it instead of talking about yourself in the third person.

  • by Pharmboy ( 216950 ) on Wednesday April 11, 2007 @07:27PM (#18696399) Journal
    Itanium?
  • Re:Sure there is (Score:5, Insightful)

    by QuantumG ( 50515 ) <qg@biodome.org> on Wednesday April 11, 2007 @07:34PM (#18696491) Homepage Journal
    Cool, with that kind of benefit, I'm sure you can point to some significant applications written in a functional language for parallel execution.

    This kind of pisses me off. Functional programming enthusiasts are always telling other people that they should be using functional languages, but they never write anything significant in those languages themselves.
  • Re:yay (Score:5, Insightful)

    by Anonymous Coward on Wednesday April 11, 2007 @07:38PM (#18696517)
    Funny, I wouldn't consider a mobo without one, because Intel is working towards an open source driver. I'm sick of binary drivers and unfathomable nvidia error messages. At least Nvidia expends some effort; ATI is a complete joke. Even on Windows, ATI palms you off with some sub-standard media player and a ridiculous .NET application that runs in the taskbar (What fucking planet are those morons on?)

    So you can bash Intel graphics all you like, but for F/OSS users they could end up as the only game in town. We're not usually playing the latest first-person shooters; performance only needs to be "good enough".

  • Re:Sure there is (Score:5, Insightful)

    by ewhac ( 5844 ) on Wednesday April 11, 2007 @07:38PM (#18696527) Homepage Journal
    Cool! Show me an example of how to write a spinning OpenGL sphere that has procedurally-generated textures and reacts interactively to keyboard/mouse input in Haskell, and I'll take a serious whack at making a go of it.

    Extra credit if you can do transaction-level device control over USB.

    Schwab

  • by Anonymous Coward on Wednesday April 11, 2007 @07:39PM (#18696539)
    This is a valid criticism and comment.
    The 950 is barely passable, especially with Vista.
    Not really Intel's fault: their target was the "barely passable" segment, leaving the rest of the field to the real GPU makers. Intel's main reason to offer this was probably that OEMs needed a one-stop shopping solution.

    My Dell has the 950 and Vista Business and I wish I had upgraded to a more powerful GPU.

    BTW, I am not the same AC as the original post.

  • Re:Sure there is (Score:3, Insightful)

    by QuantumG ( 50515 ) <qg@biodome.org> on Wednesday April 11, 2007 @08:09PM (#18696827) Homepage Journal
    No, dude, he's asking you to solve real problems using the functional language you claim is so much better at solving real problems. And, as usual, the response from the functional programming crowd is to point at supposed case studies that no one can verify, or at contrived benchmarks.
  • Re:Sure there is (Score:2, Insightful)

    by Bill Barth ( 49178 ) <bbarthNO@SPAMgmail.com> on Wednesday April 11, 2007 @08:11PM (#18696847)
    If you read the paper you linked to below, you'll find that Google's MapReduce is implemented as a C++ library. Specifically, check out Appendix A of their paper.
  • Re:Sure there is (Score:3, Insightful)

    by beelsebob ( 529313 ) on Wednesday April 11, 2007 @08:13PM (#18696861)
    Actually, no... That's exactly the point here. These chips are so complex to think about that a human can't possibly juggle it all in their head. A good human will *never* be as good as a compiler for these chips, and a good functional language compiler for these chips will almost always be better than a good procedural language compiler.
  • Re:Sure there is (Score:5, Insightful)

    by AstrumPreliator ( 708436 ) on Wednesday April 11, 2007 @08:16PM (#18696895)
    I couldn't find anything related to procedurally-generated textures, not that I really looked. I could find a few games written in Haskell, though. I mean, they're not as advanced as a spinning sphere or anything like that...

    Frag [haskell.org], an FPS done for an undergrad dissertation using Yampa and Haskell.
    Haskell Doom [cin.ufpe.br], which is pretty obvious.
    A few more examples [cin.ufpe.br].

    I dunno if that satisfies your requirements or not, though I don't quite get how this is relevant to the GP's post; this seems like more of a gripe with Haskell than anything. But if I've missed something, please elaborate.
  • Re:Sure there is (Score:5, Insightful)

    by ewhac ( 5844 ) on Wednesday April 11, 2007 @08:17PM (#18696907) Homepage Journal

    Functional languages will let us utilize multiple cores without the headaches and performance is acceptable, to claim otherwise is plain short-sighted.

    I've done some rudimentary reading on functional programming languages -- mostly Haskell and LISP (which is sorta FP) -- and I believe you when you cite all the claimed benefits. The architecture of the languages certainly enables them.

    However, every time I've tried to get a handle on Haskell, all the examples presented tend to be abstract. In other words, they contrive a problem that Haskell is fairly well-suited to solving, and then write a solution in Haskell, using data structures and representations entirely internal to Haskell. "Poof! Elegance!" Well, um...

    I'm a gaming, graphics, and device driver geek, and so my explorations of new stuff tend to lean heavily in that direction. I'm interested in more "concrete" expressions of software operation. Could Haskell offer new or interesting possibilities in network packet filtering? Perhaps, but first you have to read reams of text on how to bludgeon the language into reading and writing raw bits.

    The other issue with FP is that it tends to treat all problems as a collection of simultaneous equations -- things that can be evaluated at any time, in any order. There's a huge class of computing problems that can't be described that way: you can't unprint a page on the line printer. There are facilities for sequencing/synchronizing operations (Haskell's monads, for instance), but I get the impression that FP's elegance starts to fall apart when you start using them.

    Understand that my exposure to FP in general and Haskell in particular is less than perfunctory, and I am very likely misunderstanding a great deal. I'd like to learn and understand more about FP, but so far I haven't encountered the "Ah-hah!" example yet.

    Schwab

  • If Intel keeps backing its equipment with excellent OSS support, I'll happily switch to an all-Intel platform, even at a significant premium.

    NVIDIA's Linux drivers are pretty good, but ATI/AMD's are god-awful, and both are much more difficult to use than Intel's.

    I'd love to see an Intel GPU/CPU platform that was performance competitive with ATI/AMD or NVIDIA's offerings.
  • Good for linux (Score:3, Insightful)

    by cwraig ( 861625 ) on Wednesday April 11, 2007 @08:34PM (#18697045) Homepage
    If Intel starts making graphics cards with enough power to compete with nvidia and ati, they will find a lot of Linux support, as they are the only ones who currently have open source drivers: http://intellinuxgraphics.org/ [intellinuxgraphics.org] I'm all for supporting Intel's move into graphics cards as long as they continue to help produce good Linux drivers.
  • Re:Sure there is (Score:3, Insightful)

    by Bill Barth ( 49178 ) <bbarthNO@SPAMgmail.com> on Wednesday April 11, 2007 @08:42PM (#18697107)
    There's no MapReduce compiler. The programmer writes C++. So, at best, the programmer is the functional language compiler, and he has to translate his MapReduce code into C++.

    Again, no one disagrees with your idea that writing in a functional style is a good idea for parallel programming, but the OP said that we should give up on two specific languages and pick up a functional one. Clearly a program in a functional style can be written in C++ (which is, more or less, a superset of C89, one of the two languages the OP mentioned). The challenge to the OP was to show the world a massively parallel program of significance written in a functional language, not one written in a procedural language in a functional style. You showed us the latter; we'd still like to see the former.

  • Re:Sure there is (Score:2, Insightful)

    by Anonymous Coward on Wednesday April 11, 2007 @09:09PM (#18697265)
    If you actually read that paper, you will notice from the code snippet at the end that MapReduce is a C++ library. So it kind of proves the exact opposite of what you intended: people are doing great stuff with the languages that you say should be dropped.
  • by Bozdune ( 68800 ) on Wednesday April 11, 2007 @09:16PM (#18697313)
    Functional languages are nicely parallelizable because they don't have side effects. Unfortunately, real life is full of side effects. So, a pure functional language has to "hack" the side effect by passing it around everywhere as a closure. That gets old really, really quickly. Which is why useful functional languages contain constructs with side-effects (not without accompanying hand-wringing from purists).

    Back in the '70s, people like Jack Dennis used to promise DARPA that they could parallelize the old Fortran code used for complex military simulations by converting it to a pure functional language. It would be wonderful! Well, they couldn't, and it wasn't.

    The above notwithstanding, IF you can coerce a problem into a form in which a functional language can be effectively employed, the benefits can be huge. The code tends to be more elegant and more readable; algorithms that would be difficult to write in an applicative language like C become easy; data structure manipulation is trivial; and so on. Arguments that functional languages are "slow" have been debunked. Arguments that functional languages must be interpreted are wrong.

    And all the syntactic nonsense of C++ and the rest of the "object oriented" languages can be (mercifully) shed; pure functional languages are object oriented by nature. However, functional languages do have their own idiosyncrasies, such as the infamous Lisp "quote" and implementation-dependent funarg problems. So there are cobwebs still.

    To sum up: If you have a hard algorithmic problem to solve, a functional language will probably be a better choice, even if you end up re-coding the algorithm in an applicative language later. If you have a device driver to write, though, roll up your sleeves and get out the C manual. But first: make sure to put a debug wrapper around your mallocs (and pad your malloc blocks with patterns on both sides) so you can trap double-frees, underwrites, and overwrites. It will pay many dividends.

  • Re:Great! (Score:3, Insightful)

    by jmorris42 ( 1458 ) * <{jmorris} {at} {beau.org}> on Wednesday April 11, 2007 @09:47PM (#18697505)
    > So when Intel decides that it's time to implement new architectures and force new methods of coding it's an awesome thing, ....

    Except when it ain't. Lemme see, an entire new programming model... haven't we heard this song before? Something about HP & Intel going down with the Itanic? Ok, Intel survived the experience, but HP is pretty much out of the processor game and barely hanging in otherwise.

    Yes it would be great if we could finally escape the legacy baggage of x86, but it ain't going to happen anytime soon. The pain isn't there yet that would convince people to migrate.
