Future of 3D Graphics
zymano writes "Extremetech
has this nice article on the future of 3D graphics. The article also mentions that graphics card GPUs can be used for non-traditional, computation-heavy processing like physics. A quote from the article: "GPU can be from 10 to 100 times faster than a Pentium 4 and Scientific computations such as linear algebra, Fast Fourier Transforms, and partial differential equations can benefit". My question: if these cards are getting so powerful at computations, then why do we need an Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor..."
No processor. (Score:3, Informative)
Apple did this several years ago. The Newton 2000 and 2100 didn't use a CPU but rather the graphics processor.
Re:No processor. (Score:5, Insightful)
That is, as soon as there is no CPU and the GPU handles its tasks, it becomes a CPU by definition!
Re:No processor. (Score:2)
Re:No processor. (Score:2)
Re:No processor. (Score:2)
That's pretty cool. I haven't done much ASM for the StrongARM, so I don't know as much about its internals as I should.
Re:No processor. (Score:2)
Re:No processor. (Score:2)
It's an ARM derivative from Digital and it is not a graphics processor but a genuine CPU.
The head of Nvidia agrees with the poster (Score:4, Interesting)
He was like, our cards ARE the computer, and are becoming far more important than the CPU for the hard-core stuff.
It was interesting, but I totally pooh-poohed it.
Obviously he was smarter than me.
Re:The head of Nvidia agrees with the poster (Score:2, Interesting)
I'm very interested in the answer...
Thanks
Re:The head of Nvidia agrees with the poster (Score:5, Informative)
These programs can be given to the GPU via specialized low-level assembly language that has been developed to expose the programmability of GPUs. (They are pretty clean, RISC-like instruction sets).
Alternatively, you can use a higher level programming language, like NVIDIA's Cg, or Microsoft's HLSL, to write programs to run on the GPU. These are somewhat C-like languages that then compile down to those GPU assembly instruction sets.
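To make that concrete, here's a rough CPU-side sketch of the programming model (plain C++; everything in it is made up for illustration and is not NVIDIA's or Microsoft's actual API). A fragment program is basically a little function that gets run once per pixel with no knowledge of its neighbours; Cg/HLSL compile something shaped like brighten() below into those RISC-like shader instructions, and the hardware runs many invocations in parallel where this sketch just loops.

    #include <cstddef>
    #include <vector>

    struct RGBA { float r, g, b, a; };

    // Conceptually, a fragment program: a pure function from per-pixel
    // inputs to one output colour.
    RGBA brighten(const RGBA& in, float gain) {
        return { in.r * gain, in.g * gain, in.b * gain, in.a };
    }

    // The "rendering pass": apply the program to every pixel independently.
    void run_fragment_program(std::vector<RGBA>& framebuffer, float gain) {
        for (std::size_t i = 0; i < framebuffer.size(); ++i)
            framebuffer[i] = brighten(framebuffer[i], gain);
    }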
Re:The head of Nvidia agrees with the poster (Score:5, Informative)
Interesting to compare the 3DFX perspective... (Score:4, Informative)
Interestingly he thinks it'll be specialized hardware that will do ray-tracing, etc.
http://www.hardwarecentral.com/hardwarecentral/reviews/1721/1/ [hardwarecentral.com]
"Is there a future for radiosity lighting in 3D hardware? Ray-tracing? When would it become available?
Gary: Yes, but probably just in specialized hardware, as it's a very different problem. Ray-tracing is nasty because of its non-locality, so fast localized hacks will probably prevail as long as people are clever, especially for real-time rendering on low-cost hardware. It's interesting that RenderMan has managed to do amazing CGI without ray-tracing. That's an existence proof that a hack in the hand is worth ray-tracing in the bush.
Oh... and for people who haven't seen it before, here's a cool detailed paper about how the pipeline of a traditional 3D accelerator can be tweaked to do ray tracing...
http://graphics.stanford.edu/papers/rtongfx/rtongfx.pdf [stanford.edu]
Reading that shows how programming a graphics pipeline is quite different (more interesting? more complicated?) than programming a general purpose CPU.
We need traditional processors (Score:2, Informative)
Re:We need traditional processors (Score:2)
Sure, you'd be able to do some hellishly good data transforms, and perhaps a 'CPU' with a dozen of these AltiVec units could crunch through some RC5-72 units like crazy, but not much else!
Umm....GPU?!? (Score:4, Funny)
...Huh?
Graphics? I still use a VT100
Re:Umm....GPU?!? (Score:2, Funny)
Re:Umm....GPU?!? (Score:5, Funny)
In GPU's Soviet Russia, pixel pushes you!
In GPU's Soviet Russia, graphics card displays you!
In GPU's Soviet Russia, jaggies smooth you!
In GPU's Soviet Russia, adventure game pixel hunts you!
Re:Umm....GPU?!? (Score:2)
I should work at a library reference desk or something.
Re:We need traditional processors (Score:5, Informative)
> It might take a real long time, but it is a general purpose processor and so can process anything
The same holds true for GPUs. Like CPUs, they are Turing complete [wikipedia.org].
Re:We need traditional processors (Score:3, Insightful)
Also known as vector processors.
The more things change...
Re:We need traditional processors (Score:3, Interesting)
Re:We need traditional processors (Score:2)
Re:We need traditional processors (Score:2)
Re:We need traditional processors (Score:3, Informative)
Yes, they are. DX9-class GPUs are Turing complete. Given enough cycles they can solve any problem you give them, just like a CPU.
Re:We need traditional processors (Score:2, Informative)
Until NV35, GPUs didn't even have flow control. As of today, the largest GPU program you can have is 1024 instructions (or 2048?). Either way, freakin' small.
GPUs are essentially glorified math co-processors with a crap-ton of memory bandwidth. Instead of focusing on the square roots and cosines, they focus on the dot products and matrix transforms.
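If you want to see what "dot products and matrix transforms" means in practice, here's a minimal sketch (plain C++, illustrative names) of the two operations a graphics pipeline spends most of its time doing:

    #include <array>

    using Vec4 = std::array<float, 4>;
    using Mat4 = std::array<Vec4, 4>;  // row-major: each row gets dotted with the vector

    float dot(const Vec4& a, const Vec4& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
    }

    // A vertex transform is just four dot products, one per output component.
    Vec4 transform(const Mat4& m, const Vec4& v) {
        return { dot(m[0], v), dot(m[1], v), dot(m[2], v), dot(m[3], v) };
    }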
We could remove the traditional processor if... (Score:3, Insightful)
Precision (Score:3, Informative)
Simulating higher precision with single precision arithmetic is possible, but the performance penalty is too severe for it to be useful.
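For the curious, the usual trick is "double-single" arithmetic, built out of exact-error summation like Knuth's two-sum below; every emulated operation costs several real float operations, which is where that performance penalty comes from. A rough sketch, assuming strict IEEE single precision (i.e. don't compile with fast-math):

    #include <cstdio>

    // Knuth's two-sum: returns s = fl(a+b) and the exact rounding error e,
    // so that a + b == s + e exactly in IEEE single precision.
    void two_sum(float a, float b, float& s, float& e) {
        s = a + b;
        float bb = s - a;
        e = (a - (s - bb)) + (b - bb);
    }

    int main() {
        float s, e;
        two_sum(1.0f, 1e-8f, s, e);   // 1e-8 would be lost in a plain float add
        std::printf("s=%.9g  recovered error e=%.9g\n", s, e);
    }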
Re:Precision (Score:5, Informative)
Re:Precision (Score:4, Informative)
Re:Precision (Score:5, Insightful)
For some of the stuff that we do, we would kill for a slightly faster card. Right now, for simulation of IR imagery, we have to pre-fly a scenario where the sensor-carrying vehicle (use your imagination) flies a trajectory and we render the imagery along this path. This rendering consists of doing convolutions of background scenes with target information to generate a final image. At the end we have a 'movie'. This can take a few hours to run.
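To give a feel for the per-frame work, here's a stripped-down sketch of that kind of convolution in plain C++ (the sizes and layout are made up for illustration); multiply this inner loop by every pixel of every frame and you can see where the hours go:

    #include <vector>

    // Naive 2D convolution of a background image with a small target kernel.
    // width/height describe the background; the kernel is k x k (k odd).
    std::vector<float> convolve(const std::vector<float>& bg, int width, int height,
                                const std::vector<float>& kernel, int k) {
        std::vector<float> out(bg.size(), 0.0f);
        int r = k / 2;
        for (int y = r; y < height - r; ++y)
            for (int x = r; x < width - r; ++x) {
                float acc = 0.0f;
                for (int ky = -r; ky <= r; ++ky)
                    for (int kx = -r; kx <= r; ++kx)
                        acc += bg[(y + ky) * width + (x + kx)] * kernel[(ky + r) * k + (kx + r)];
                out[y * width + x] = acc;
            }
        return out;
    }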
Afterwards, we run the simulation in realtime and play frames from this movie (adjusted in rotation and scaling, etc. because real-time interactions can result in flight paths subtly different from the movie) and show it to a *real* sensor and see what happens.
The point: if we could do real-time convolution inside a graphics card and then get the data back out some way (we usually need to go through some custom interface to present the data to the sensor), then a lot of pain would be saved. First, we could move the video-generating infrastructure into the real-time simulation, which would be simpler; we wouldn't have to worry about rotating and scaling the result, since we'd be generating exactly correct results on the fly; we wouldn't have to worry about allocating huge amounts of memory (gigabytes) to hold the video, or about memory latency and bandwidth and problems with NUMA architectures; and finally (maybe) we could change scenarios on the fly without having to worry about whether we already had a video ready to use.
I think the computational horsepower is almost there, but right now there's no good way to get the data back out of the card. On something like an SGI you get stuff after it's gone through the DACs, which means you now have at most 12 bits per channel (less than we want, although you can use tricks for some stuff to get up to maybe 16 bits for pure luminosity data). What would be sweet in the extreme is to get a 128-bit floating point value for each pixel in the X*Y pixel scene. So if the scene were 640x480, that's about 4.7MB of data per frame, and at say 60Hz that's about 281MB a second to convert and send out.
Life would be sweet. Sadly, this is a pretty special purpose application, so I'm not too hopeful. What's weird is that only NVidia (and perhaps ATI) are coming up with this horsepower because of all the world's gamers, and vendors like SGI are left with hardware that is many, many generations old (although it does have the benefit of assloads of texture memory).
In short: need 1GB of RAM on the card and a way to get stuff back out after we've done the swoopty math.
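For what it's worth, the "get stuff back out" hook does exist in OpenGL as glReadPixels; the sketch below assumes you already have a current GL context and a framebuffer that really stores floats, which on consumer cards is exactly the slow or unsupported part.

    #include <GL/gl.h>
    #include <cstddef>
    #include <vector>

    // Read back the full framebuffer as 32-bit floats, 4 channels per pixel.
    // Whether this preserves full precision (or runs fast) depends entirely
    // on the card and driver.
    std::vector<float> read_back(int width, int height) {
        std::vector<float> pixels(static_cast<std::size_t>(width) * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data());
        return pixels;
    }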
Re:Precision (Score:2)
Re:Precision (Score:2)
I'm not even sure that the 32-bit floating point values used in the GPU are real IEEE 32-bit floating point values (they must have specific handling of very small values, for example); I suspect video card makers cut corners in their implementations, which lowers precision.
Re:Precision (Score:2)
Old news... (Score:5, Insightful)
Good for the end user, but going to be a pain in the ass for software developers to take advantage of, is my guess.
Re:Old news... (Score:2)
Re:Old news... (Score:2)
Kids, these days! Real developers love a good PITA, their whole life is just a PITA, in fact if you ever see someone on a bike without a seat you can be sure this is either an MFC or Kylix core developer taking his work home.
Specialised hardware (Score:4, Interesting)
Why buy a big processor when the only intensive computational tasks are video en/decoding and games, tasks that can easily be farmed off to other, cheaper units?
Re:Specialised hardware (Score:5, Informative)
Take digital audio. It used to be that CPUs were too pathetic to do even simple kinds of digital audio ops in realtime, so you had to offload everything to dedicated DSPs. Pro Tools did this: you bought all sorts of expensive, specialised hardware and loaded your Mac full of it so it could do real-time audio effects. Now, why bother? It is much cheaper to do it in software, since processors ARE fast enough. Also, if a new kind of effect comes out, or an upgrade, all you have to do is load new software, not buy new hardware.
Also, if you like, you can get DSPs to do a number of computationally intensive things. As mentioned, the GPU is really common. They take over almost all graphics calculations (including much animation, with things like vertex shaders) from the CPU. Another thing along the games line is a good soundcard. Something like an Audigy 2 comes with a DSP that will handle 3D positioning calculations, reflections, occlusions and such. If you want a video en/decoder, those are available too: MPEG-2 decoders are pretty cheap, the encoders cost a whole lot more. Of course, the en/decoder only works for the video formats it was built for, nothing else. You can also get processors to help with things like disk operations; high-end SCSI and IDE RAID cards have their own processor on board to take care of all those calculations.
Re:Specialised hardware (Score:2)
Re:Specialised hardware (Score:4, Insightful)
However the real point of a sound DSP is to free up more CPU for other calculations. A game with lots of 3d sounds can easily use up a non-trivial amount of CPU time, even on a P4/AthlonXP class CPU. So no, it isn't critical like a GPU, it can be handled in software, but it does help.
Re:Specialised hardware (Score:2)
Re:Specialised hardware (Score:2)
Re:Specialised hardware (Score:3)
What you are referring to is an ASIC, an Application-Specific Integrated Circuit. These really stomp through data like nothing, are cheap to build, but unfortunately expensive t
Re:Specialised hardware (Score:2)
I do often wonder why specialised hardware is not used more often for tasks that are often performed. I recall that the Mac used to have some add-on cards that sped some Photoshop operations up to modern levels 3-4 years ago.
Why buy a big processor when the only intensive computational tasks are video en/decoding and games, tasks that can easily be farmed off to other, cheaper units?
For the same reason many people buy a $1,000 computer, rather than a $100 VCR, $250 DVD player, two $250 gaming conso
Re:Specialised hardware (Score:3, Insightful)
We have found, however, that as long as your system is a one-off creation, or to be used in a limited number of instantiations, it typically doe
Graphics processor vs. general-purpose CPU (Score:5, Insightful)
Because GPUs are specialized processors. They are only good at a couple of things: moving data in a particular format around quickly, and linear algebra. It is possible to do general-purpose calculations on a GPU, but that's not what it is good at, so you'd be wasting your time.
This is akin to asking why you shouldn't go see a veterinarian when you get sick. Because veterinarians specialize in animals. Sure, they might be able to treat you, but since their training is with animals you might find their treatments don't help as much as going to see a regular doctor.
Re:Graphics processor vs. general-purpose CPU (Score:5, Funny)
Re:Graphics processor vs. general-purpose CPU (Score:2)
What the article says later is that GPUs will grow to become more generalized machines; this isn't to say that they will become as generalized as an x86 CPU, but they may become more than just Graphics Processing Units, expanding into other areas where games require special acceleration.
The good thing is that those 'other areas' are used by things ot
Reason why NOT to drop CPU (Score:3, Insightful)
Math Co-Processor (Score:2)
Re:Math Co-Processor (Score:2)
I know it's nitpicking, but the SX/DX designation doesn't seem to indicate with or without math co-processor, at least for the 386 series.
Re:Math Co-Processor (Score:2)
Because.... (Score:3, Insightful)
Seriously though, the design we have now is a good one: a strong, general-purpose CPU augmented with a specialized GPU for high-cost operations. Depending on how high the cost is (e.g. iDCT for playing DVDs), we may want to start moving the work to the specialized processor - this has been done with ATI cards for a couple of years now.
The difference between a CPU and GPU (Score:5, Informative)
"Normal" code, such as a game engine, compiler, word processor or MP3/DivX encoder does all sorts of different operations, in a different order each time, many which are inherently serial in nature and don't scale well with parallel processing. This type of code is full of branches.
To optimize graphics processing, you can really just throw massively parallel hardware at it. Modern cards do what, 16 pixels/texels per cycle? 4+ pipelines for each stage all doing the EXACT same thing?
Regular code just isn't like that. Because different operations have to happen each time and in each program, you can't optimize the hardware for one specific thing. In serial applications, extra pipelines just go to waste. Also, frequent branch instructions mean that you have to worry about things like branch prediction (which takes up a fair amount of space). When you do have operations that can happen in parallel (such as make -j 4), the different pipelines are doing different things.
Take your GeForce GPU and P4 and see which can count to 2 billion faster. In a task like this, where both processors can probably do one add per cycle (no parallelizing in this code), the 2GHz P4 will take one second, and the 500MHz GeForce will take four seconds (assuming it can be programmed for a simple operation like "ADD"). Even if you throw in more instructions but the code cannot be parallelized, the CPU will probably win.
Basically, since you can't target one specific application, a general purpose processor will always be slower at some things - but can do a much wider range of things. Heck, up until recently, "GPUs" were dumb and couldn't be programmed by users at all. I haven't looked at what operations you can do now, but IIRC you are still limited to code with at most 2000 instructions or so.
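A tiny illustration of that difference (plain C++): the first loop is a dependency chain where extra pipelines are simply wasted, the second is the independent per-element work that a GPU's parallel pipelines eat up.

    #include <vector>

    // Inherently serial: iteration i depends on the result of iteration i-1.
    float running_sum(const std::vector<float>& v) {
        float acc = 0.0f;
        for (float x : v) acc = acc * 0.5f + x;   // dependency chain
        return acc;
    }

    // Embarrassingly parallel: every element is independent, like a pixel.
    void scale_all(std::vector<float>& v, float s) {
        for (float& x : v) x *= s;                // no cross-iteration dependency
    }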
Re:The difference between a CPU and GPU (Score:4, Insightful)
Take your 486SX without a coprocessor... you can get an FPU (coprocessor) which does floating point operations MUCH faster than you can emulate them. However, you can't just use an FPU and ditch the 486, since the FPU can't do anything but floating point ops - it can't boot MS-DOS... it can't run Windows 3.1... it can't fetch values from memory... it can't even add 1+1 precisely!
Seti@Home (Score:2, Interesting)
Re:Seti@Home (Score:3, Insightful)
The explanation that the developers sometimes give against tweaking and opensourcing is scientific integrity. Graphics cards are not designed for exact replication of processes, and they often trade precision for speed. Still, I believe tha
GPU Performance Myths (Score:5, Informative)
If you'd really like the answer to this question, try programming anything on the GPU and you'll understand. It's hell to do half this stuff. GPUs are highly specialized and make very specific tradeoffs in favor of graphics processing. Of course, some operations, specifically those that can be modeled using cellular automata, map well to this set of constraints. Others, such as ray-tracing, can be shoe-horned in, but if you were to try to write a word processor on the GPU, it'd essentially be impossible. The GPU allows you to do massively parallel computations, but penalizes you heavily for things such as loops of variable length or reading memory back from the card outside of the once-per-cycle frame update, and the price of interrupting computation is prohibitive. Clearing the graphics pipeline can take a long, long time.
Furthermore, while there have been a few papers published claiming the orders of magnitude increase in speed in these sorts of computations, none actually demonstrate this sort of speed-up. Everyone's speculating, but when it comes to it, results are lacking.
b.c
Re: GPU Performance Myths (Score:5, Insightful)
> The GPU allows you to do massively parallel computations, but penalizes you heavily for things such as loops of variable length or reading memory back from the card outside of the once-per-cycle frame update, and the price of interrupting computation is prohibitive. Clearing the graphics pipeline can take a long, long time.
> Furthermore, while there have been a few papers published claiming the orders of magnitude increase in speed in these sorts of computations, none actually demonstrate this sort of speed-up. Everyone's speculating, but when it comes to it, results are lacking.
I looked into using the GPU for vector * matrix multiplications over my Christmas vacation (yep, a Geek), and everywhere I turned I found people saying that whatever you gained in the number crunching you lost in the latency of sending your numbers to the GPU and reading them back when done. In the end I didn't even bother running an experiment on it.
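The back-of-the-envelope version of that argument looks something like this (all the throughput and bandwidth numbers below are assumptions, just to show the shape of it); the bus term dominates for a single matrix-vector multiply:

    #include <cstdio>

    int main() {
        const double n = 1000;                    // n x n matrix times an n-vector
        const double flops = 2.0 * n * n;         // multiply-adds
        const double cpu_gflops = 2.0;            // assumed P4-class FPU throughput
        const double gpu_gflops = 20.0;           // assumed shader throughput
        const double bus_bytes_per_s = 250e6;     // assumed AGP transfer/readback rate
        const double bytes_moved = (n * n + 2 * n) * 4;  // matrix + input + result, as floats

        double cpu_s = flops / (cpu_gflops * 1e9);
        double gpu_s = flops / (gpu_gflops * 1e9) + bytes_moved / bus_bytes_per_s;
        std::printf("CPU: %.6f s   GPU incl. transfers: %.6f s\n", cpu_s, gpu_s);
    }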
But maybe conventional wisdom was wrong; elsewhere in the talkbacks I see links to a couple of
SGI did this (very) long ago (Score:3, Interesting)
At least for the demos...
Integrated GPU/CPU (Score:5, Insightful)
You mean like: this [ati.com]?
Now, that press release was about two years old, and you can bet that ATI has advanced beyond that point (though I can't provide details).
Also, while that part doesn't integrate a serious 3D graphics GPU, there's no reason this can't be done -- except one, and it's the same reason a powerful CPU isn't integrated: heat dissipation.
But, for a "media processor", it sure is sweet.
Disclaimer - I work for ATI (Score:2)
Still, I don't think anyone is going to get upset over a link to an existing press release.
Didn't read the article, but it doesn't matter. (Score:5, Interesting)
Look, for the same price as a $400 graphics card you can get yourself a dual-CPU machine and a cheap AGP graphics card, and do it in "software" with about the same efficiency, if you know what you're doing.
Because the extra CPU isn't inherently parallel the way most modern GPUs are, you need to compensate with a higher clock speed and use whatever multimedia instructions it has to the fullest extent (i.e. AltiVec, MMX2, etc.)
But of course, the GPU is better suited to the actual drudge work of getting your screen to light up. If there's stuff to be computed and forgotten by it (i.e. particle physics), it's probably better left decoupled to exploit parallelism in that abstraction.
As you get to a limit in computational efficiency, you start adding on DSPs, and this is where FPGAs and grid computing start looking interesting.
So it shouldn't be considered surprising that these companies will say that; they can see that trend and they want a piece of that aux. processor/FPGA action. The nForce is a step in the right direction. They don't want to be relegated to just making graphics accelerators when they have the unique position to make pluggable accelerators for anything.
But to plan on packaging an FPGA designed for game augmentation and calling it an uber-cool GPU is just a marketing trick. This technology is becoming commercially viable, it seems.
Re: Huh? GPUs aren't FPGAs (Score:2)
Call it NV50, and Anandtech will call it a GPU. Then there's no argument.
Wheel of reincarnation (Score:3, Insightful)
You wrote: My question - If these cards are getting so powerful at computations then why do we need a Intel/AMD processor at all? Just make a graphics card with more transistors and drop the traditional processor...
Congratulations! You have just reinvented Ivan Sutherland's Wheel of Reincarnation, which is exactly about this: normal CPUs are enhanced with specific functions to provide acceleration for a common task; the enhancements get so big that farming them out into a separate chip/module seems like a good idea; the separate thingy grows in complexity as more flexibility and programmability is needed; finally, you end up with a new CPU. And then someone says.... You get the idea.
Here is a good take [cap-lore.com] on Ivan Sutherland's story. And here [stanford.edu] is Myer and Sutherland's original paper.
Read, think and learn.
Re:Wheel of reincarnation (Score:2)
The horror! (Score:5, Funny)
Unfortunately, the researchers have all inexplicably been rendered deaf.
You are really an idiot. (Score:2, Troll)
Do you understand why these so-called GPUs are so fast at doing graphics and mathematics geared towards graphics? Because they are Graphics Processing Units. They are not general computers. They are designed to do one thing, and one thing only, really well: the math for 3D graphics. They would be terribly slow at general
The future is bit granular. (Score:2, Interesting)
Re:The future is bit granular. (Score:2)
The two historical examples of this that I am aware of are the AMD Am2900 bit-slice processor series and the Connection Machine model 1. The CM-1 was unique in that it had commercial sales at scales up to 64K bits wide and used a 1-bit wide distributed memory. The reasons these highly customizable architectures did not persist are twofold: economy of scale favored the standardized microprocessor (on the hardware end), and they wouldn't run pre-existing software and it was hard to find
Re:The future is bit granular. (Score:2)
You're making the "elegance and simplicity are well worth it" error. Basically, it means that in science, elegance and simplicity are not unconditionally worth it.
Doesn't matter how good it could be made. (Score:2)
No matter how good you made the card, assuming it would be a combined video card/processor, you would be stuck in a situation like buying a motherboard with tons of onboard stuff, like a video card, for example. No matter how much RAM you put in it, the onboard video's power will never be quite as good as a separate video card of comparable power plugged into the motherboard. The same wo
GPU can be a universal turing machine (Score:2, Insightful)
It is done via fragment (pixel) programs (for the arithmetic instructions) and multiple 'rendering' passes (for program control). Ask Göögle [google.com] if you want to know more about this interesting subject.
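Here's a rough CPU-side sketch of the idea (plain C++, no GPU calls, the per-cell rule is made up): one "pass" applies the same per-cell program to a whole grid, and control flow is just the host deciding how many passes to issue, ping-ponging between two buffers.

    #include <cstddef>
    #include <vector>

    using Grid = std::vector<float>;

    // One "rendering pass": apply a per-cell rule to every cell of src, writing dst.
    // On a real GPU this is one draw call with a fragment program bound.
    void pass(const Grid& src, Grid& dst) {
        for (std::size_t i = 0; i < src.size(); ++i)
            dst[i] = 0.5f * src[i] + 1.0f;        // stand-in for any per-cell rule
    }

    // "Program control" = deciding how many passes to issue from the host.
    Grid run(Grid state, int passes) {
        Grid other(state.size());
        for (int p = 0; p < passes; ++p) {
            pass(state, other);
            state.swap(other);                    // ping-pong between the two buffers
        }
        return state;
    }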
Just my 2 cc.
Best regards,
Daniel
Reconfigurable computing (Score:5, Interesting)
There's a lot of work being done on reconfigurable computing, which imagines replacing the CPU, GPU, DSP, soundcard, etc., with a single reconfigurable gate array (like a RAM-based FPGA). You'd probably have a small control processor that manages the main array. On this array one could build a CPU (or several) of whatever ISA you needed, plus a GPU, DSP, or whatever functionality was called for by the program(s) you're running at the current moment. Shut down UnrealTournament 2009 and open Matlab, and DynamicLinux will wipe out its shader code and vector pipelines and grow a bunch of FP units instead. Run MAME and it will install the appropriate CPUs and other hardware.
In the initial case, this would be controlled statically, a bit like the way a current OS's VM manages physical and virtual memory. Later, specialist "hardware" could be created, compiled, and optimised based on an examination of how the program actually runs (a bit like a Java dynamic compiler). So rather than running SETI@home, your system would have built a specialist SETI ASIC on its main array. There will be lots of applications where most of the work is done in such a soft ASIC, and only a small proportion is done on a (commensurately puny) soft CPU.
This all sounds too cool to be true, and at the moment it is. Existing programmable gate hardware is very expensive, of limited size (maybe enough to hold a 386?), runs crazy hot, and doesn't run nearly fast enough.
CPUs and GPUs are competitors (Score:4, Insightful)
A development this extreme is unlikely. However, what is very real is the fact that GPUs and CPUs are at least partially competitors.
If you are doing a lot of graphics, then the best computer for your money may be one with a great graphics card and a so-so CPU. The better and cheaper the GPUs Nvidia can make, the smaller the demand for state-of-the-art Pentiums.
But unless there is a revolutionary development somewhere, we will probably see computers with both kinds of processors for a good while.
Tor
I think that should be.. (Score:2)
from 10 to 100 times faster than a Pentium 4 at Scientific computations
Also, Wired had an article on this, with the main gist "NVidia plans to make the CPU obsolete" [wired.com].
NVIDIA: CPU="Co-Processing Unit" (Score:2, Interesting)
Oh great... (Score:2, Funny)
So, now I should start putting high-end graphics cards in my servers? Has Apache been compiled for an nVidia GPU yet? I wonder how well a Geforce FX runs Linux or Windows 2000. I bet the 2.0 Pixel Shader spec helps a lot with database speed.
We don't need no stinkin' CPU! (Score:2)
Exactly. And if the Playstation 2 can do over 6 GFLOP/s, why doesn't Cray just make a cluster of Playstations instead of buying a shitload of Opterons? Really, someone should give these guys a clue...
RMN
~~~
DSP chip (Score:2)
here we go again... (Score:2)
Why do we need a traditional processor? (Score:2)
Eyes are too limiting... (Score:2)
After all, images are merely optical sensory input data. If the bandwidth of the device doubles every year, you should be somewhere in the neighborhood of being able to produce a data stream comparable to a human's normal sensorium.
I'm looking for someone who knows more about this than I do (which shoul
this isn't even a real issue... (Score:2, Interesting)
Raise Your Hand (Score:2)
PS2 supercomputer (Score:2)
How about tiling GPUs ? (Score:3, Insightful)
The same goes for CPUs. CPU power would increase if more CPU cores could be added on the fly. There was a company called Inmos that produced the Transputer chips, which were able to work in a grid: each chip had 4 interconnects for connecting other Transputers to it.
Of course, there are advances in buses, memory etc that all require total upgrades.
I think the industry has overlooked parallelism as a possible solution to computation-intensive problems. Of course, there is a class of problems that can't be solved in parallel, where each computation step is fed the results of the previous step... but many tasks can be parallelized: graphics rendering, searching, copying, compression/decompression, etc... anything that has to do with multiple data. It's a wasted opportunity. Instead, companies go for raw power. I guess it's more profitable and less technologically challenging... Introducing parallelism in the hardware would require a new bunch of programming languages and techniques to become mainstream, too.
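For example, here's a rough sketch of farming the same "multiple data" operation out over a few host threads (the thread count and chunking are arbitrary); the same shape of work is what maps onto a Transputer grid or onto GPU pipelines.

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Apply the same operation to every element, split across worker threads.
    void parallel_scale(std::vector<float>& data, float s, unsigned workers = 4) {
        std::vector<std::thread> pool;
        std::size_t chunk = (data.size() + workers - 1) / workers;
        for (unsigned w = 0; w < workers; ++w) {
            std::size_t begin = w * chunk;
            std::size_t end = std::min(begin + chunk, data.size());
            pool.emplace_back([&data, s, begin, end] {
                for (std::size_t i = begin; i < end; ++i) data[i] *= s;  // disjoint ranges, no races
            });
        }
        for (auto& t : pool) t.join();
    }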
Finally, I would like to say that if quantum computers become a reality, then we will see pretty good reality simulators inside a computer, since their speed would be tremendous, many times the speed of today's top hardware.
Better pixels (Score:3, Insightful)
Re:Good card? ;) (Score:2)
The rule of thumb that I follow is that a video card upgrade is only worth it every 2 generations. If you have a GeForce256, skip the GeForce2 and get a GeForce3.
I currently have a GeForce3, and am going to get the FX 5900 when the price drops down to a sane level, heh.
Re:Good card? ;) (Score:2)
Re:Good card? ;) (Score:2)
Re:CPU's are still neccessary (Score:3, Insightful)
I agree completely that offloading tasks from the CPU is good, look at the
Re:CPU's are still neccessary (Score:2)
Re:Quartz Extreme for other purposes (Score:2)
At the moment, we have a standard way of talking to a CPU (the architecture of it - Windows runs on x86, OSX runs on PPC, for example) and there's a standard way of talking to a graphics card (OpenGL, for one). For different processors, we'd need standards to talk to those extra processors. As for now, Central Processing (by definition) and Visualisation are two parts of using a computer that are important to a large enough group of computer users that those standards have eme
Pros and cons of legacy support (Score:2)
Re:Maybe not, but.. (Score:2)
Perhaps you mean something along the lines of SMP?
Moving on, why can't we squeeze two motherboards and CPUs into a slightly larger case to run Mosix? Redundant, multi-processing, limited by the speed of the data
Re:Maybe not, but.. (Score:2)
Re:Maybe not, but.. (Score:2)
No, it is not a solution to the fact that you can't make a GPU act like a CPU; it is a solution for those who need redundancy, sometimes want to emphasise graphics (although less than a full-blown GPU might offer) and sometimes want to emphasise raw processing power, and who might want the flexibility such a solution might give them.
And it isn't that different from a quad-Pentium system, apart from the fact that I suggest reassigning the CPUs according to what the system needs - i.e. dedicating one, two o
Re:Console Love (Score:2)
Mandelbrot.ps (Score:4, Interesting)
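The body of this comment isn't shown above, but the title (".ps" presumably meaning pixel shader) points at the classic demo of per-pixel GPU computation. Purely as a hedged illustration, here's the per-pixel escape-time loop in plain C++ that makes the Mandelbrot set such a natural fit for a fragment program: every pixel's result is independent of every other pixel's.

    #include <complex>
    #include <cstdio>

    // Escape-time iteration for one pixel: entirely independent of every
    // other pixel, which is why it maps so well to a fragment program.
    int mandel_iterations(std::complex<float> c, int max_iter = 64) {
        std::complex<float> z(0.0f, 0.0f);
        for (int i = 0; i < max_iter; ++i) {
            z = z * z + c;
            if (std::norm(z) > 4.0f) return i;   // escaped
        }
        return max_iter;                          // treated as "inside" the set
    }

    int main() {
        // Render a tiny ASCII view of the set, one "fragment" per character.
        for (int y = 0; y < 20; ++y) {
            for (int x = 0; x < 60; ++x) {
                std::complex<float> c(-2.0f + 3.0f * x / 60.0f, -1.0f + 2.0f * y / 20.0f);
                std::putchar(mandel_iterations(c) == 64 ? '#' : '.');
            }
            std::putchar('\n');
        }
    }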