Intel Reveals the Future of the CPU-GPU War

Arun Demeure writes "Beyond3D has once again obtained new information on Intel's plans to compete against NVIDIA and AMD's graphics processors, in what the Chief Architect of the project presents as a 'battle for control of the computing platform.' He describes a new computing architecture based on the many-core paradigm with super-wide execution units, and the reasoning behind some of the design choices. Looks like computer scientists and software programmers everywhere will have to adapt to these new concepts, as there will be no silver bullet to achieve high efficiency on new and exotic architectures."
  • Great! (Score:3, Informative)

    by Short Circuit ( 52384 ) * <mikemol@gmail.com> on Wednesday April 11, 2007 @06:15PM (#18696287) Homepage Journal
    As I recall, AMD's Athlon beat out the competing Intel processor in per-clock performance, partially as a result of having a more superscalar architecture. It's nice to see that, with the NetBurst architecture dead, Intel's finally taking an approach that's expandable and extensible.

    The CPU wars have finally gotten interesting again. I'm going to go grab some popcorn.
    • Re:Great! (Score:4, Interesting)

      by JordanL ( 886154 ) <jordan,ledoux&gmail,com> on Wednesday April 11, 2007 @06:25PM (#18696385) Homepage
      So when Intel decides that it's time to implement new architectures and force new methods of coding it's an awesome thing, but when Sony does it people tell them to stop trying to be different... I know people will cry about the console market being different, but the principals of the decisions are the same. If people cried about the Cell I expect them to cry about Intel's new direction. And this had to be said... I have Karma to burn.
      • Re: (Score:3, Funny)

        by nuzak ( 959558 )
        Intel doesn't go around telling us that their design will push 17 hojillion gigazoxels, with more computing power than Deep Blue, HAL, and I AM put together, in order to render better-than-real detail in realtime while simultaneously giving you a handjob and ordering flowers for your gf.
      • Re: (Score:2, Interesting)

        I thought Sony's processor design was awesome, and I still do.
        • Re:Great! (Score:5, Informative)

          by strstrep ( 879828 ) on Wednesday April 11, 2007 @08:08PM (#18697255)
          It's a good design; it just doesn't seem like a good design for a video game system. It's a general-purpose CPU attached to several CPUs that are essentially DSPs. DSP programming is very weird, and you need to at least understand how the device works at the instruction level for optimal performance. A lot of DSP code is still written in assembly (or at the very least hand-optimized after compilation).

          It's very expensive to have DSP code written compared to normal CPU code, and video game manufacturers have been complaining that the cost of making a game is too high. Also, most of the complexity in a video game nowadays is handled by the GPU, not the CPU. Now, the Cell would be great for lots of parallel signal processing or some other similar task, and I bet it could be used to create a great video game; it would just be prohibitively expensive.

          The Cell is a great solution to a problem. However, that problem isn't video games. A fast traditional CPU, possibly with multiple cores, attached to a massively pipelined GPU would probably work better for video games.
        • The Cell would have been nice were it not for the surprisingly slow DMA read problem.

          Since Sony announced the oddity last June (verbatim: "~16MB/s (no, this is not a typo)"), I presume it means all PS3 Cell CPUs present and future will carry it on... or at the very least, game developers will not be allowed to use the extra bandwidth should DMA reads be fixed, to maintain compatibility with first-gen PS3s.
      • Re: (Score:3, Insightful)

        by jmorris42 ( 1458 ) *
        > So when Intel decides that it's time to implement new architectures and force new methods of coding it's an awesome thing, ....

        Except when it ain't. Lemme see, entire new programming model..... haven't we heard this song before? Something about HP & Intel going down on the Itanic? Ok, Intel survived the experience but HP is pretty much out of the processor game and barely hanging in otherwise.

        Yes it would be great if we could finally escape the legacy baggage of x86, but it ain't going to happen
      • The point of the presentation was that Intel's proposal is not that different from today's x86; it's a cache-coherent SMP. Really the only new feature is 16-wide SIMD rather than 4-wide in SSE today, so you just unroll your inner loops four times as much. But Cell is totally different from traditional architectures because the SPUs have no caches.
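        To make the unrolling point concrete, here is a minimal sketch (an illustration only, not code from Intel's presentation) of the same loop written scalar and 4-wide with SSE intrinsics; a 16-wide machine would consume four times as many lanes per instruction:

            #include <xmmintrin.h> // SSE intrinsics

            // Scalar: one addition per loop iteration.
            void add_scalar(const float* a, const float* b, float* out, int n) {
                for (int i = 0; i < n; ++i)
                    out[i] = a[i] + b[i];
            }

            // 4-wide SSE: four additions per instruction.
            // Assumes n is a multiple of 4, for brevity.
            void add_sse(const float* a, const float* b, float* out, int n) {
                for (int i = 0; i < n; i += 4) {
                    __m128 va = _mm_loadu_ps(a + i);
                    __m128 vb = _mm_loadu_ps(b + i);
                    _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
                }
            }
            // A 16-wide unit runs the same loop with i += 16 and wider
            // registers; the structure of the program does not change.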
      • by Bodrius ( 191265 )
        Did 'people' really cry about the Cell that much?

        I think some programmers have suggested that perhaps, as a possibility, changing the programming model to something far more difficult to exploit was not the smartest choice in a competitive console market.

        But I don't think anyone really 'cried' about it. They just chose not to invest in it, yet.

        Even the skeptics seemed to think Cell was pretty cool by itself (me included).

        What people 'cried about' was the price tag, the misleading trailers / screenshots, the
      • Re: (Score:3, Funny)

        by Fred_A ( 10934 )

        I know people will cry about the console market being different, but the principals of the decisions are the same.
        No, it's quite different: the console market is indeed more like high school. In the PC market we don't have principals, we have managers.
  • yay (Score:2, Interesting)

    by Anonymous Coward
    Maybe they will ditch the shiatty 950 graphics chip that is all too common in notebook computers.
    • Re: (Score:3, Informative)

      by ewhac ( 5844 )
      The 945/950 GMCH is common in notebooks because it's easy to implement (Intel's already done almost all the work for you), it's fairly low-power and, most important of all, it's cheap.

      Schwab

    • Re:yay (Score:5, Insightful)

      by Anonymous Coward on Wednesday April 11, 2007 @06:38PM (#18696517)
      Funny, I wouldn't consider a mobo without one, because Intel are working towards an open source driver. I'm sick of binary drivers and unfathomable Nvidia error messages. At least Nvidia expend some effort; ATI are a complete joke. Even on Windows, ATI palm you off with some sub-standard media player and some ridiculous .NET application that runs in the taskbar (what fucking planet are those morons on?)

      So you can bash Intel graphics all you like, but for F/OSS users they could end up as the only game in town. We're not usually playing the latest first-person shooters; performance need only be "good enough".

      • Exactly my thought - I have never had problems with any of the laptops with Intel graphics chipsets and Linux. In fact, Fedora pretty much kicks ass with the GMA950 on my Macbook as opposed to my desktop with an ATI card that has 2 DVI outputs (can never get both outputs to work at the same time in Fedora, but I freely admit I am probably screwing up the video configuration).
        • by Ajehals ( 947354 )
          ahem, mail me your X configuration and I'll sort it for you if you want (or have a damn good go). What you describe has become something of a regular occurrence in my part of the world. My email is shown, but replace the domain with gmail.com as I'm out and about at the moment.

          As for Intel graphics on notebooks, I agree: there is nothing like having a component in a notebook where you don't have to worry about it being something bizarre, non-standard, randomly hacked (or firmware-crippled), especially when you are
  • Sure there is (Score:5, Insightful)

    by Watson Ladd ( 955755 ) on Wednesday April 11, 2007 @06:17PM (#18696319)
    Abandon C and Fortran. Functional programming makes multithreading easy, and programs can be written for parallel execution with ease. And as an added benefit, goodbye buffer overflows and double frees!
    • Re:Sure there is (Score:5, Insightful)

      by QuantumG ( 50515 ) <qg@biodome.org> on Wednesday April 11, 2007 @06:34PM (#18696491) Homepage Journal
      Cool, with that kind of benefit, I'm sure you can point to some significant applications that have been written in a functional language which have been written for parallel execution.

      This kind of pisses me off. People who are functional programming enthusiasts are always telling other people that they should be using functional languages, but they never write anything significant in these languages.
      • Re: (Score:3, Interesting)

        by beelsebob ( 529313 )
        How about Google search? Is that a big enough example for you? It's written using a functional library called MapReduce.
        • by QuantumG ( 50515 )
          Proprietary software doesn't count.. why? Cause no-one can see the benefits of using a functional language except the privileged few who have access to the source code. So.. can you please name a large open source program written in a functional language which benefits from this supposed ease of parallelisation that is being claimed here. Or are we just supposed to take your word for it?

          • How about Perl 6's compiler -- Pugs is written entirely in Haskell.
            • by QuantumG ( 50515 )
              and the second part of my question? Is this multithreaded? Where are the benefits being claimed?

              • Re: (Score:3, Interesting)

                by beelsebob ( 529313 )
                Can you show me any open source project where massive parallelism is being exploited? I'm not sure I can think of any.

                In the meantime, you were given a nice example -- ATC systems and telephone exchanges -- and you can have a research paper about MapReduce [216.239.37.132]; if you don't believe me, believe the peer-reviewed research.

                • Re: (Score:2, Insightful)

                  by Anonymous Coward
                  If you actually read that paper, you will notice from the code snippet at the end that MapReduce is a C++ library. So it kind of proves the exact opposite of what you intended: people are doing great stuff with the languages that you are saying should be dropped.
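                  For what it's worth, the map-then-reduce shape can be expressed directly in C++. A minimal sketch (an illustration, not Google's actual library; it uses C++17's std::transform_reduce, which postdates the paper and this thread) of a parallel word-length sum:

                      #include <execution>  // std::execution::par (C++17)
                      #include <functional> // std::plus
                      #include <numeric>    // std::transform_reduce
                      #include <string>
                      #include <vector>

                      // "Map" each word to its length, then "reduce" with +.
                      // The parallel policy is safe because both steps are pure.
                      std::size_t total_length(const std::vector<std::string>& words) {
                          return std::transform_reduce(
                              std::execution::par,
                              words.begin(), words.end(),
                              std::size_t{0},
                              std::plus<>{},                                  // reduce
                              [](const std::string& w) { return w.size(); }); // map
                      }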
                • Can you show me any open source project where massive parallelism is being exploited? I'm not sure I can think of any.

                  BOINC [berkeley.edu]?
                • Wouldn't Apache's server thread pool be an example? And surely there's some open source version of seti@home (possibly seti itself?)
            • Re:Sure there is (Score:4, Informative)

              by Fnord ( 1756 ) <joe@sadusk.com> on Wednesday April 11, 2007 @07:28PM (#18696985) Homepage
              A perl6 *interpreter* was written in Haskell, and it's considered a non-performance-oriented reference implementation, purely for exploring the perl6 syntax. No one has ever doubted that interpreters and compilers are easier to do in functional languages. One of the things you learn first when you take a class in Lisp is a recursive descent parser. But the version of perl6 that's expected to actually perform? Parrot is written in C. The fact that it's nowhere near done is a completely different matter...
          • Proprietary software doesn't count.. why? Cause no-one can see the benefits of using a functional language except the privileged few who have access to the source code.

            And just how many open source applications that rely on parallelisation (and we won't count any kind of parallelisation where there's mostly no coordination required between threads/processes, such as Apache) have been written recently?

            • by QuantumG ( 50515 )
              A lot.. but not as many as I would expect have been written in functional languages.. seeing as they are so much easier to write multithreaded apps in.

              Look, I don't think I'm asking something too unreasonable here. If functional languages are so much easier to write multithreaded apps using, then show me. Either point at something that already exists which demonstrates this claim or write something new that demonstrates this claim.

              The claim has been made, back it up.

              • Look, I don't think I'm asking something too unreasonable here.

                By limiting to open source only, I think you are.

                Let's take the classic example of a difficult multi-threading problem: a DBMS. It's not completely separable, like an FTP server, and it's not effectively serialised, like an X11 server.

                Have people written highly-threaded DBMSes in functional languages? Sure. [erlang.org] Are they open source? Probably not. There are a few decent open source DBMSes, so there's really no itch to scratch. (So why was Mn

                • by QuantumG ( 50515 )
                  Excuse me.. but you are trying to justify to me that it is better to write certain forms of software in functional languages, but you're not willing to show me the code that is written in that way so that I can personally evaluate whether or not the code is "better". So basically you're asking me to take it on faith. For all I know the program you are claiming is written in a functional language might not be.. or may be so convoluted and unmaintainable that it doesn't matter how well it stacks up in benchm
                  • Excuse me.. but you are trying to justify to me that it is better to write certain forms of software in functional languages, but you're not willing to show me the code that is written in that way so that I can personally evaluate whether or not the code is "better".

                    First off, I didn't make the original claim.

                    Secondly, I agree with you that if you can't see the source code, it's not science.

                    But thirdly, I still think you're being unfair. You don't need to see the plans for a Boeing aircraft and a Tupole

              • With respect to the topic, I don't think multithreaded apps are what's important here. Distributed apps are probably more important. The sub processors in Cell don't have shared memory, after all. I think your requirement for multithreadedness might be slightly bogus, and the technologies that are in use now for computational clusters may fare better when used in alternative GPU designs.
          • How about Hadoop [apache.org]?

            Seriously though, looking at the Debian compiler shootout, OCaml does fairly well at CPU-intensive applications. In the past I've seen some applications claimed to be written in it (Unison), but I can't find a decent list of them, and I don't remember any being parallel / distributed. Yahoo! Stores was apparently written in Lisp, and Paul Graham won't shut up about it.

            But the GGP is talking about extending such languages, already sparsely in use, to distributed computations. Erlang is the p
          • "open Source" is not a factor here. No one would write open source software that takes advantage of hardware that no one owns. It you are witting software that is to runs one 100+ core machine you are likely getting paid by the people who own the huge room full of equipment.

            Last I checked they were running software to compute aerodynamic loads on space lift boosters on the cluster. It's one of those jobs that just runs for days and weeks even on racks of dual CPU linux boxes. It was an optimization se
        • Re: (Score:2, Insightful)

          by Bill Barth ( 49178 )
          If you read the paper you linked to below, you'll find that Google's MapReduce language is implemented as a C++ library. Specifically, check out Appendix A of their paper.
          • Yes, indeed you will -- if you read up on any functional language you'll always discover that they are eventually based on procedural processes -- after all, our microprocessors are all procedural engines. The library itself is functional, and that, as the paper points out, is where they get the speed -- not in the implementation detail that the library is written procedurally.
            • Re: (Score:2, Interesting)

              by Bill Barth ( 49178 )
              Yes, but if you look at the code in Appendix A, it's written in C++. Yes, it calls the MapReduce library, which is functional in style, but the user still must write their code in an essentially procedural language. Yes, the library hides a lot from the user, but so what? They still have to write their code in C++! The OP exhorted us to "abandon C and Fortran," but you're touting a C++ class library as an example of a win for functional languages. I assume you can see why we might object!

              Of course Google'

              • On the contrary -- this is a functional language -- the reason this can work so fast and be so parallel is that they have referential transparency, and that they can do lots of things without them interfering with each other. That's because they're writing functionally. It doesn't matter whether the translation happens into a high level language (like C++) or a low level one when the compiler translates it into machine code.

                The bottom line is that until we change architecture, we have to translate our fu

                • Re: (Score:3, Insightful)

                  by Bill Barth ( 49178 )
                  There's no MapReduce compiler. The programmer writes C++. So, at best, the programmer is the functional language compiler, and he has to translate his MapReduce code into C++.

                  Again, no one disagrees with your idea that writing in a functional style is a good idea for parallel programming, but the OP said that we should give up on two specific languages and pick up a functional one. Clearly a program in a functional style can be written in C++ (which is a superset of C89, more or less, which is one of the

                  • I think we've come to a violent agreement then. In the meantime, for your real world programs... see lower down for your telecoms systems being functional (Erlang), your ATC systems being functional, and your 3D games being made parallel in functional languages.
                    • Again, as another noted, there's little evidence for parallelism in these programs. I'm not saying there isn't any; it's just that we can't see it.

                      I (as someone who works at a supercomputing center) am still waiting for a parallel weather forecasting code or hypersonic aerothermodynamics code or the like written in one of the many functional languages touted here. I deal with such codes on a daily basis, and they're all written in Fortran, C, and C++ (in that (decreasing) order of occurrence). These are th

                  • No, the implementation is not purely functional.. but what allows the massive parallelism are the ideas and techniques from functional programming languages! In the paper Google describes unprocessed chunks.. that's pretty much what a thunk is (an unevaluated expression). But they also state the map and reduce functions themselves are written in C++. It's not pure, but it is functional... So we'll call it a tie.
                  • As I keep saying, this whole argument came up b/c the OP told us to drop C and Fortran, but here we are with Google using C++ to do something cool! Doesn't look like there's a need to drop C or Fortran, just a need for some smarter-than-the-average-bear programmers to make some libraries to make everyone else's life easier. Sorry, you can't have your tie. Putting functional ideas in a procedural language is good learning, but it's not a functional language showing off its ability to handle massive paralleli
                  • So why not call it a tie, because C++ needs to pluck (and is currently plucking) some of the more practical functional ideas, just as C++ borrowed certain ideas of OOP from Smalltalk et al. Python has borrowed some nice things such as Haskell's list comprehensions, but it also has generators etc. (though no lazy evaluation).
      • Cool, with that kind of benefit, I'm sure you can point to some significant applications that have been written in a functional language which have been written for parallel execution.

        Would the telephone switching systems that Erlang was made for the express purpose of implementing count, or the air traffic control systems also implemented with it, or would it have to be something more of a significant application than that?

        If games are more your thing, how about this [lambda-the-ultimate.org].

        This kind of pisses me off. People who

      • Erlang is used for some massively parallel problems like telephone switches. SISAL outperformed Fortran on some supercomputers. Jane Street Capital uses OCaml for their transaction processing system. Lisp is used in CAD programs. So FP is being used.
      • by slamb ( 119285 ) *

        This kind of pisses me off. People who are functional programming enthusiasts are always telling other people that they should be using functional languages, but they never write anything significant in these languages.

        I think that's true of all people who claim to have a silver bullet. For example, I'll be impressed when the "eXtreme Programming 100% test coverage, no checkins without a new test passing" crowd actually manage to write a kernel like that, including tests demonstrating complex race conditions

    • Re:Sure there is (Score:5, Insightful)

      by ewhac ( 5844 ) on Wednesday April 11, 2007 @06:38PM (#18696527) Homepage Journal
      Cool! Show me an example of how to write a spinning OpenGL sphere that has procedurally-generated textures and reacts interactively to keyboard/mouse input, in Haskell, and I'll take a serious whack at making a go of it.

      Extra credit if you can do transaction-level device control over USB.

      Schwab

      • Okay then... First you write a simple function that generates the points on the sphere, based on a stream of positions of the pokes, then you write a function that generates the textures, then you use the OpenGL libraries to generate the output... Then you sit back and gloat, because on these chips, that's gonna run a whole lot faster than your C code.
      • Re:Sure there is (Score:5, Insightful)

        by AstrumPreliator ( 708436 ) on Wednesday April 11, 2007 @07:16PM (#18696895)
        I couldn't find anything related to procedurally-generated textures, not that I really looked. I could find a few games written in Haskell though. I mean they're not as advanced as a spinning sphere or anything like that...

        Frag [haskell.org] which was done for an undergrad dissertation using yampa and haskell to make an FPS.
        Haskell Doom [cin.ufpe.br] which is pretty obvious.
        A few more examples [cin.ufpe.br].

        I dunno if that satisfies your requirements or not. Though I don't quite get how this is relevant to the GP's post. This seems like more of a gripe with Haskell than anything. But if I've missed something, please elaborate.
    • Dude, you think C and Fortran are the main alternatives to functional languages? You're about 20 years out of date! Nowadays, the Big Thing is OOP languages. Everybody programs in C++, Java, or C# these days.

      You do have a point. I wrote the Concurrency chapter in The Java Tutorial (yeah, yeah, it wasn't my idea to assign a bunch of tech writers to write what's essentially a CS textbook), struggled for 15 pages just to discuss the basics of the topic, and ran out of time to cover more than half of what I sho
      • Re: (Score:3, Interesting)

        by Coryoth ( 254751 )

        You do have a point. I wrote the Concurrency chapter in The Java Tutorial, struggled for 15 pages just to discuss the basics of the topic, and ran out of time to cover more than half of what I should have. Most of what I wrote was about keeping your threads consistent, statewise.

        Ideally I would like to see Java take up something like this [inf.ethz.ch] as a means to handle concurrency. It is simple, easy to understand, easy to reason about concurrent code, and doesn't require stepping very far outside standard stateful OO programming methods. Whether that will actually happen is, of course, another question, but something along those lines does offer a good way forward.

      • Dude, you have the Abstract Math gene. Most of us don't.

        An "abstract math gene" probably isn't the issue. It's just an issue of practice - most good programmers today have many years of experience working with object oriented and procedural programming languages. Functional programming is different, and it takes practice to get used to - years of practice to be as comfortable as what you're used to.

        Further, a pure functional system is utterly useless since IO is a side effect. This doesn't change the fact

    • by Coryoth ( 254751 )

      Abandon C and Fortran. Functional programming makes multithreading easy, and programs can be written for parallel execution with ease. And as an added benefit, goodbye buffer overflows and double frees!

      Functional languages seem to regularly get trotted out when the subject of multi-cores and multi-threading comes up, but it really isn't a solution -- or, at least, it isn't a solution in and of itself. If you program in a completely pure functional manner with no state then, yes, things can be parallelised. The reality is that isn't really a viable option for a lot of applications. State is important. Functional languages realise this, and have ways to cope. With the ML family you get a little less purity
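      The state point is easy to see in miniature. A sketch (an illustration in C++ for concreteness, not taken from the parent post) of why the pure loop parallelises trivially while the stateful one needs care:

          #include <atomic>
          #include <cstddef>
          #include <vector>

          // Pure: out[i] depends only on in[i], so every iteration is
          // independent and any subset can run in parallel, lock-free.
          void square_all(const std::vector<int>& in, std::vector<int>& out) {
              for (std::size_t i = 0; i < in.size(); ++i)
                  out[i] = in[i] * in[i];
          }

          // Stateful: a shared accumulator couples every iteration. Run
          // across threads, a plain int sum would be a data race; it has
          // to become an atomic (or per-thread partial sums reduced at
          // the end), and that coordination is exactly the hard part.
          int sum_all(const std::vector<int>& in) {
              std::atomic<int> sum{0};
              for (std::size_t i = 0; i < in.size(); ++i)
                  sum += in[i];
              return sum;
          }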

    • Re: (Score:3, Interesting)

      Functional programming makes multithreading easy, but multithreading != vectorization, and vectorization is the bulk of what is needed to take advantage of this type of processor. I've yet to encounter a language as suited to talking about vector code as Fortran is, sad as that may be.
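      The thread-versus-lane distinction is easy to show. A sketch (an illustration using OpenMP directives; #pragma omp simd is OpenMP 4.0, which postdates this thread, so compilers of the day relied on auto-vectorizers or vendor pragmas instead) of the same loop parallelised both ways:

          // saxpy.cpp -- compile with e.g. g++ -fopenmp -O2 saxpy.cpp

          // Multithreading: iterations are split across cores.
          void saxpy_threads(int n, float a, const float* x, float* y) {
              #pragma omp parallel for
              for (int i = 0; i < n; ++i)
                  y[i] = a * x[i] + y[i];
          }

          // Vectorization: iterations are packed into the SIMD lanes of a
          // single core; no extra threads are created.
          void saxpy_simd(int n, float a, const float* x, float* y) {
              #pragma omp simd
              for (int i = 0; i < n; ++i)
                  y[i] = a * x[i] + y[i];
          }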
  • Cell (Score:3, Insightful)

    by Gary W. Longsine ( 124661 ) on Wednesday April 11, 2007 @06:20PM (#18696343) Homepage Journal
    The direction looks similar to the direction the IBM Power-based Cell architecture is going.
    • That was my thought too: multiple cores, with a handful of special-purpose ones.

      IBM's Cell processor would be a lot more useful in general if it were slightly modified: replace one or two aux cores with GPU cores, keep 2-4 as more general-purpose cores, and add one or two FPGA-style cores, possibly with preset configuration options (e.g. audio processing, physics, video, etc.).

      Complicated, yes. But can you imagine what one could do. Your single computer could be switched on the fly to encode an audio and video stream far
  • Astroturf (Score:5, Insightful)

    by Anonymous Coward on Wednesday April 11, 2007 @06:22PM (#18696361)

    Arun Demeure writes "Beyond3D has once again obtained new information...

    If you are going to submit your own articles [beyond3d.com] to Slashdot, at least have the decency to admit it instead of talking about yourself in the third person.

  • future of computing? (Score:2, Interesting)

    by jcgf ( 688310 )
    I'm just waiting till they come out with a complete single-chip PC (I know there are examples, but they don't perform spectacularly). Just enough PCB for some external connectors and some voltage regulation.
  • by jhfry ( 829244 ) on Wednesday April 11, 2007 @06:22PM (#18696369)
    I don't know what it is, or how it will be different from x86, but progress can't continue if we don't look for better ways of doing things.

    It cannot be argued that x86 is the best architecture ever made; we all know it's not... but it is the one with the most research behind it. We need the top companies in the industry, Intel, AMD, MS, etc. to sit down and design an entirely new specification going forward.

    New processor architecture, a new form factor, a new power supply, etc...

    Google has demonstrated that a single-voltage PSU is more efficient, and completely doable. There is little reason that we still use internal cards to add functionality to our systems; couldn't these be more like cartridges, so you don't need to open the case?

    Why not do away with most of the legacy technology in one swoop and update the entire industry to a new standard?

    PS, I know why: money; too much investment in the old to be worth creating the new. But I can dream, can't I?
    • by Pharmboy ( 216950 ) on Wednesday April 11, 2007 @06:27PM (#18696399) Journal
      Itanium?
    • Once you find something that works one magnitude (10x) better than the old technology, people will adopt it.
    • Why not do away with most of the legacy technology in one swoop and update the entire industry to a new standard?

      I agree. However, one should design a new CPU architecture based on a software model, not the other way around, as was done with the Cell CPU.

      PS, I know why: money; too much investment in the old to be worth creating the new. But I can dream, can't I?

      Yes you can dream. But unless the new architecture is going to solve a big problem in the industry, it's not worth it. The biggest problem in the comp
  • by mozumder ( 178398 ) on Wednesday April 11, 2007 @06:33PM (#18696479)
    Basically, put in dozens of slow, low-IPC but area-efficient processors per CPU. Later on, throw in some MMX/VLIW-style instructions to optimize certain classes of algorithms.

    The first Niagara CPUs were terrible at floating-point math, so they were only good for web servers. The next generation, I hear, is supposed to be better at FPU ops.

  • And around the world a million tinfoil hats rejoiced.
  • It's a bad move on Intel's part. Many common programs don't even make full use of multi-core, extended instruction sets, and 64-bit. If they are relying on something exotic to put them ahead... it just isn't going to work out, unless they think up a method to run unmodified code optimized for their exotic architectures.
  • If Intel keeps providing excellent OSS support for its equipment, I'll happily switch to an all-Intel platform, even at a significant premium.

    NVIDIA's Linux drivers are pretty good, but ATI/AMD's are god-awful, and both NVIDIA's and AMD/ATI's are much more difficult to use than Intel's.

    I'd love to see an Intel GPU/CPU platform that was performance competitive with ATI/AMD or NVIDIA's offerings.
    • I'm skeptical. The last time I saw this much hype about Intel graphics was when they tried to make their own card in conjunction with the introduction of AGP. Regarding that first chip, I seem to recall them saying that huge on-board memory for textures was a waste of memory chips. They quickly lost that round because that was a mistaken assumption. Even their integrated chips are sub-par compared to the integrated chips of other makers.
  • Easy research!
  • Good for linux (Score:3, Insightful)

    by cwraig ( 861625 ) on Wednesday April 11, 2007 @07:34PM (#18697045) Homepage
    If Intel starts making graphics cards with more power to compete with NVIDIA and ATI, they will find a lot of Linux support, as they are the only ones who currently have open source drivers: http://intellinuxgraphics.org/ [intellinuxgraphics.org]. I'm all for supporting Intel's move into graphics cards as long as they continue to help produce good Linux drivers.
  • We have been hearing about digital convergence forever, but most people want their computer separate from their cellphone or TV. Processors from Intel itself still have completely separate sets of instructions for integers and for floating point. In the same vein, even if Intel's architecture is possible, it will be less upgradable, more difficult to program for, and have less backward compatibility over time than a set of components with well-defined functions.

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...