Cell Architecture Explained
IdiotOnMyLeft writes "OSNews features an article written by Nicholas Blachford about the new processor developed by IBM and Sony for their PlayStation 3 console. The article goes deep inside the Cell architecture and describes why it is a revolutionary step forward in technology and the most serious threat yet to x86. '5 dual core Opterons directly connected via HyperTransport should be able to achieve a similar level of performance in stream processing as a single Cell. The PlayStation 3 is expected to have 4 Cells.'"
Seeing is believing (Score:3, Informative)
It's not like we haven't heard it before. It usually turns out to be halfish-truish for some restricted subset of operations in a theoretical setting - you know, the kind where you discount buses, memory and latencies.
Re:Seeing is believing (Score:5, Interesting)
No, this sort of architecture is part of a general trend towards parallelization. It is smart, and it is known to work, and I would expect some bright SPARC-wise people to chime in and say "uh-huh" and some SGI-wise people to chime in and say "I've seen some of this before." The OS people [dragonflybsd.org] are starting to move things in this direction, and I've heard that Darwin has had the asynchronous-messaging type of threading model for a while (RTFA: the article explicitly mentions Tiger's GPU-leveraging techniques). If you have the head for it, try reading up on NUMA and compare that with SMP.
The math is simple. CPUs are CPUs, and anyone can make one that is the same speed as the competition, and if they do it second they can do it cheaper. The guy who can make 20 CPUs work like one CPU that does 20 times the work in a given time will win, because he can always just throw more hardware at the problem while the SMP guys have to go back to the drawing board. In this case, the only way to beat 'em is to join 'em. Maybe the specific "Cell" computing design isn't it, but the ol' PC is dead if these things start hitting the commodity price points.
That's a big, fat IF. So, don't bet on it (yet), but it's even worse to ignore it.
Re:Seeing is believing (Score:3, Interesting)
FTFA
Caches work by storing part of the memory the processor is working on. If you are working on a 1MB piece of data, it is likely only a small fraction of this (perhaps a few hundred bytes) will be present in cache. There are kinds of cache design which can store more or even all the data, but these are not used as they are too expensive or too slow.
APU local memory - no cache
To solve the complexity associated with cache design
Re:Seeing is believing (Score:3, Interesting)
Caching: CPU > cache (1 or 2) > main memory > HD
and then implement another system that is *completely different*
Non-Caching: APU > local memory > main memory
Now first off, how is it different?
Let me take a vaguely educated guess.
Currently the cache managers in x86 CPUs "predict" what part of the memory space is needed. This prediction isn't always that good, and efforts to make programs hint to the processor what to cache haven't worked well enough (or at leas
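A minimal sketch of such a hint in C, assuming GCC: __builtin_prefetch is one kind of software cache hint, and the processor is free to ignore it, which is part of why these efforts often disappoint.

#include <stddef.h>

double sum_with_hints(const double *data, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* Hint: we will want the element 64 slots ahead, read-only (0),
         * with low temporal locality (1). Purely advisory. */
        if (i + 64 < n)
            __builtin_prefetch(&data[i + 64], 0, 1);
        sum += data[i];
    }
    return sum;
}

Whether the hint helps at all depends on the access pattern and the particular cache, which is exactly the point being made above.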
Re:Seeing is believing (Score:3, Interesting)
1) It must be programmed differently. Instead of just accessing memory how you want, you must explicitly copy the part of memory you need at the moment. So, if your APU is acting as a vertex shader, you need to copy the shader code into the LS before you start processing (see the sketch below). Essentially, the LS can give you the time savings of a cache, but you have to manage it yourself to get the benefits.
2) Since the LS isn't managed by the hardware, it doesn't need
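To make point 1 concrete, here is a rough C sketch of "managing it yourself". dma_get is a hypothetical stand-in for whatever transfer primitive the real hardware exposes (here it is just a copy), and the local-store size is an assumption for illustration, not a Cell specification.

#include <stdint.h>
#include <string.h>

#define LS_SIZE (128 * 1024)          /* assumed local-store size, illustrative only */
static uint8_t local_store[LS_SIZE];  /* the APU's private memory */

/* Hypothetical transfer primitive; on real hardware this would be a DMA. */
static void dma_get(void *ls_dst, const void *main_src, size_t n)
{
    memcpy(ls_dst, main_src, n);
}

/* Before the APU can act as a vertex shader, the programmer explicitly
 * pulls both the shader code and its data into local store - the time
 * savings of a cache, but managed by hand. */
void run_vertex_shader(const void *shader_code, size_t code_len,
                       const void *vertices, size_t vert_len)
{
    dma_get(local_store, shader_code, code_len);
    dma_get(local_store + code_len, vertices, vert_len);
    /* ... execute the shader against the vertices, entirely in LS ... */
}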
Seeing DRM in Cells? (Score:3, Insightful)
Much of the current discussion has been on how to program and coordinate all the little digital signal processors (DSPs - aka APUs). I think these questions are moot because the envisioned DRM (digital rights management) will make the "cell data" and "cell programs" uninspectable. Even by the on-chip PUs (processor unit - something like a Power Mac ru
Human skin cells (Score:2, Funny)
This is beginning to sound more like (Score:3, Funny)
Re:This is beginning to sound more like (Score:4, Funny)
The 45 episode saga in which:
Bill Gates becomes a cyborg and summons the forces of evil.
A new Cell is constructed out of unsold Itaniums (not to be confused with the Cell built by Sony, which is a friendly robot that is found out to be good, until he is found out to be evil when the heroes notice he is under the control of the cyborg Bill Gates, who has been behind the charade the entire time) and challenges the world to a rematch of earth-shattering proportions
Second string characters have meaningless conversations that take up entire episodes
There is hilarious comic relief from common citizens in various towns as their cities crumble around them
Krillin dies
The dragon is summoned
Goku gets a haircut
Re:This is beginning to sound more like (Score:3, Funny)
Inuyasha? KAGOME!
if it sounds too good to be true.. (Score:5, Insightful)
Was the PS2 the supercomputer it was said to be...?
The author goes on to suggest that Cell workstations would smoke x86 counterparts... but says at the same time that there probably won't be that many of them.
WTF? Though between the lines you can read at the end that he also thinks a single G5 workstation would 'smoke' x86s...
Re:if it sounds too good to be true.. (Score:2, Interesting)
I don't remember Sony making any big statements about the Emotion Engine being a supercomputer. What I do remember is that when they released the clock speed of their processor, people knew the relative power of the PS2. From what I see of the Cell architecture, I can guarantee that the Cell is much more powerful than any AMD or Intel processor.
It seems like you didn't read much into the technical aspect of the Cell architecture presented in the l
Re:if it sounds too good to be true.. (Score:2)
In reality though the US definition of a supercomputer was out of date, and by the time the PS2 was released the top-end x86 CPUs were also technically supercomputers.
Re:if it sounds too good to be true.. (Score:2, Funny)
Jaysyn
Re:if it sounds too good to be true.. (Score:4, Informative)
I suspect that the main reason there was never an Emotion Engine based cluster product was because the high performance market is tiny, especially compared to the console market, and Sony was already having trouble meeting demand with their exotic chipset when it first came out.
Anyways, I think the guy does go overboard about this new architecture. It probably will be a lot faster than PCs at certain tasks, but you can only fit so many transistors in a chip. The Cell stuff is cool though; it seems to fit a lot better with what most computers spend their time processing, unless you're doing a lot of compiling or database operations.
fanboy article (Score:4, Insightful)
I don't see anything in the Cell architecture that would fundamentally make the same number of transistors at the same speed operate faster. I see lots of bottlenecks, I/O overhead and wasted transistors. If there is some magical powerful thing that these can do SO much better than the current x86 instruction set and hardware, guess what: it'll adapt.
x86 adapted to RISC being "wildly faster" and, in the end, became better RISC than RISC was by translating more memory-efficient x86 instructions onto a RISC backend. It adapted to SIMD (Single Instruction, Multiple Data) efficiency issues by adding MMX/SSE/SSE2 and 3DNow!. It adapted to the reality of 64-bit address space and the need for more registers with the new x64 instruction set extensions. AMD and Intel could add Cell hardware and instructions too if they offered anything special, which I highly doubt they will.
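For reference, this is what that bolted-on SIMD looks like from C, using SSE intrinsics (a generic sketch, nothing Cell-specific; it assumes n is a multiple of 4 and the pointers are 16-byte aligned):

#include <xmmintrin.h>  /* SSE intrinsics, in x86 compilers since the Pentium III era */

/* Four single-precision adds per instruction - the data-level
 * parallelism that MMX/SSE/3DNow! grafted onto x86. */
void add4(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(&a[i]);
        __m128 vb = _mm_load_ps(&b[i]);
        _mm_store_ps(&out[i], _mm_add_ps(va, vb));
    }
}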
Re:if it sounds too good to be true.. (Score:3, Interesting)
What always confused me (Score:5, Insightful)
Seeing how RAM keeps getting cheaper, is it possible that new systems like the PlayStation 3 might be able to provide RAM that actually allows games to reach their potential along with this new Cell hardware?
Re:What always confused me (Score:2, Informative)
But it plays all the popular games of today's PCs with little to no lag, whereas you need a very high-end PC to play the same game!
This is mostly due to the fact that the video architecture is more direct than it is on a PC. There's no AGP bus or any bottleneck in accessing video RAM. It's more direct, which is probably why an Xbox
Re:What always confused me (Score:2, Insightful)
Actually, it is running at 720x576 (a PAL XBOX that is) but I don't see why this is so funny, because that's just the resolution of a PAL TV. Having a higher framebuffer resolution would probably only decrease the output quality when displayed on a normal television.
That said, if you have an HDTV, the XBOX can output at 1920x1080i...
Your sig is mine
Re:What always confused me (Score:2, Interesting)
Re:What always confused me (Score:3, Insightful)
I would *love* to know how you had a P3 and a Geforce 3 in 1998. I had to make do with a brand-new top-of-the-range PII-350 with a Matrox Mystique and SLI-ed Voodoo 2s. Did you pull the Geforce through one of these parallel universe wormholes?
Re:What always confused me (Score:3, Informative)
Re:What always confused me (Score:2)
Less main memory on the Xbox also means smaller levels or less content on each level. Unfortunately the cost of creating different sizes or content for two platforms is so expensive time-wise that most developers just choose to design for the Xbox and let PC user
Re:What always confused me (Score:3, Informative)
Anyway, hardware really didn't affect the game like some people pretend it did. The streamlined gameplay was because Harvey Smith and his team wanted it that way (the Xbox has certainly seen plenty of mo
I'll believe it when I see it (Score:5, Insightful)
On paper the Emotion Engine was supposed to destroy everything, but achieving maximum throughput was difficult, and other constraints such as I/O and memory hampered performance. Programmers had to learn a very different way of programming to make full use of the processor and its two vector units.
A Cell might be a killer chip on paper, but real-world hardware with I/O latency and memory constraints will bring things down to a more reasonable level. Don't forget that multiprocessor programming is *hard*.
Hopefully, developing software for Cell chips will be easier than in the early days of the PS2; Sony already said as much a few months ago.
Re:I'll believe it when I see it (Score:3, Informative)
The Sega Saturn used dual processors, and was nearly a clone of top-end Sega arcade systems. Unfortunately, it was terribly hard to program, so only in-house Sega titles were developed to utilize the full potential of the device, such as Virtua Fighter, while other titles were only using half the perform
Can this be taken seriously? (Score:5, Insightful)
"GPUs will provide the only viable competition to the Cell but even then for a number of reasons I don't think they will be able to catch the Cell."
Did this guy forget that NVidia is designing the GPU for the PS3? If Cell is so almighty, why does Sony use an NVidia GPU instead of using more Cells for graphics processing?
"There is another reason I don't think Nvidia or ATI will be able to match the Cell's performance anytime soon."
Of course, Cell-based products won't be available anytime soon either. According to the current rumors, the PS3 will be available in Japan in Spring 2006 and elsewhere in Autumn 2006. One and a half years equals a generation in the GPU world...
I love this kind of article, where future products are compared against current ones and declared clear winners...
Cost (Score:2)
>why does Sony use an NVidia GPU instead of using more Cells for graphics processing?
One possible reason is cost. When you can save a large area of a silicon die by using a specialized DSP, why would you waste processing power in a CPU? nVIDIA can provide reasonably efficient solutions such as texture units and pipes aimed at more specific types of processing. Cost is everything when you manufacture millions of
Re:Can this be taken seriously? (Score:2, Informative)
But what struck me most is that you seem to have missed the whole point the author seeks to make. Yes, Moore's law will doub
not a new architecture, and it's going to be tough (Score:4, Insightful)
In the long term, it's nice that companies are exploring these kinds of architectures. It's not nice that they are trying to monopolize what are pretty straightforward architectural choices with patents. This may be a new CPU, but there is little that is new about having a bunch of fast processors interconnected via a reconfigurable network; these just happen to be on the same chip.
Cool! (Score:5, Funny)
Well, perhaps "cool!" is not the correct response...
Re:Cool! (Score:2)
On a serious note, the Cell misses the point: we don't NEED any more CPU power. What we need is existing levels of power, but without the need for excessive cooling and the fan noise that goes with it (I can hardly type this over the noise of my P4 3.0)!
Fast, silent and power-efficient is what's needed next.
Re:Cool! (Score:3, Insightful)
85 Celsius operation with heat sink
Well, perhaps "cool!" is not the correct response...
It says with a heat sink only. Not with a fan!
The last chips that worked without a fan were the 486DX33 and 486DX40 (I'm talking mainstream desktop PC hardware, not mobile solutions). You could probably stick a fan on it and get it down to 40 degrees, while a Pentium 560 will produce liquid plasma and/or a fusion reaction if operated without a fan.
P.
Re:Cool! (Score:2)
Re:Cool! (Score:2)
I ran a Pentium-120 for a year without any kind of cooling - not even a heatsink. It worked perfectly.
Re:Cool! (Score:2)
Is it just me? (Score:3, Interesting)
http://www.blachford.info/computer/Cells/Cell_Dis
Make it look a little more like a HAL than a Cell?
Re:Is it just me? (Score:2)
Re:Is it just me? (Score:2)
Cells everywhere! (Score:2, Funny)
* 4.6 GHz
* 1.3v
* 85 Celsius operation with heat sink
In toasters.. ovens..
Compiler technology (Score:5, Informative)
The potential of parallel architectures has never been in doubt since the early days of the Cray monsters - but how to compile code to use all the features efficiently has.
I don't believe we'll see the full advantage of these types of architecture exploited without a similar breakthrough in software tools.
Mind you the hardware rocks...
Re:Compiler technology (Score:2, Interesting)
Program in a language that is referentially transparent [wikipedia.org].
...once you can assume that any function is able to be concurrently executed, all you have to solve is the communication between processors/storage. The latency of current networking technologies makes this impractical for general tasks, but it is less of a problem with a low-latency internal bus.
Time to drag those Haskell textbooks out of the closet and dust them off. ;)
Compiler technology - OpenMP (Score:3, Informative)
One question which was not addressed fully in the article is how you compile/test programs for this thing. The answer is OpenMP [openmp.org]. OpenMP is a multithreading API which can hide parallelization from the user almost completely. It's embarrassingly easy to use - only one line of code is enough to parallelize a loop. All thread creation/synchronisation remains hidden from the user. It's extremely efficient too - I was never able to achieve the same level of performance when doing the multithreading myself.
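A minimal example of that one line, in C (standard OpenMP, nothing Cell-specific; with a compiler that doesn't know OpenMP the pragma is simply ignored and the loop runs serially):

/* Compile with e.g. gcc -fopenmp. The pragma below is the single line
 * that spreads the loop across the available cores; thread creation
 * and synchronisation stay hidden in the OpenMP runtime. */
void scale(double *v, int n, double k)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        v[i] *= k;
}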
Re:Wow (Score:2)
Cray-type architectures are basically vector parallel processors. In this case it *is* possible to get compilers to pull vector-type operations out of tight serial loops (gee, this looks like a vector multiply, etc.; one such loop is sketched below).
Even then a programmer either has to:
a) Use well defined libraries (vector operations etc.)
b) Know something about the hardware t
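For illustration, the kind of tight serial loop a vectorizing compiler can recognise and map onto vector hardware (a generic C sketch, not from the article):

/* To a vectorizing compiler this "looks like a vector multiply":
 * there are no loop-carried dependencies, so the iterations can be
 * executed in parallel on vector hardware. */
void vmul(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i++)
        c[i] = a[i] * b[i];
}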
Re:Wow (Score:3)
Now you either need:
a) A really intelligent compiler
or
b) A really intelligent programmer
or
c) A language [kent.ac.uk] and corresponding underlying concurrency theory [fmnet.info] that allows you to design and analyze complex interacting multithreaded systems with ease.
Re:Compiler technology (Score:2)
There is no way you can write high performance software for the PS2 without knowing about the internal architecture specifics.
Serial and Parallel (Score:2, Insightful)
Re:Serial and Parallel (Score:2)
next please (Score:5, Insightful)
This is from the company that said the Playstation 2 would have Toy Story quality graphics, and be able to render FF8 quality FMVs in real time (thus making FMVs no longer required). It was essentially that bullshit hype that killed the Dreamcast... so yeah, now they're at it again.
Maybe I'll be proven wrong, but I doubt their system will be able to do anywhere near what they say it can in practical application.
Re:next please (Score:2)
Actually, I keep hearing about SIMD units on it, but no real 3D specific hardware on it has come to my attention. As they say, WTF?
Re:next please (Score:3, Interesting)
Re:next please (Score:4, Insightful)
(Stupid Sony. I've had my PS2 for about a year now and I still notice it almost every time I play. Can't believe how unbelievably stupid they were not to include it. That one change, which by computer graphics standards is dirt cheap, would have massively improved its graphics. Anti-aliasing, on the other hand, is expensive to do right, so while I expect it in this next generation, at least while running in NTSC or PAL, I wouldn't have expected it in the PS2 era. Though some managed, I think....)
After that, I don't trust them any farther than I can throw them. The PS2's graphics subsystem wasn't an Eighth Wonder of the World; it was an incompetent disgrace. Fortunately most of their fanboys are so stuck up the ass with Sony that it took them years to notice, instead of it jumping out at them in 5 seconds.
I have it for the game selection, and I like the games, I like the controller, I like the case, etc... but the graphics are far, far worse than what they should have been. You have to reach back for years and years to find anything else that didn't do mipmapping.
(I've also played the Dreamcast some more lately. It definitely pumps out fewer polygons, but equally definitely, they are higher-quality polygons, and the fact that the Dreamcast clearly has mipmapping is no small part of that. The PS2 was a step forward in some ways, but a big step back from the DC in others.)
Re:next please (Score:3, Insightful)
How could they possibly do this cheap? (Score:5, Insightful)
I've been reading PR about the Cell for years, and nothing I've ever read has seemed even remotely plausible. Is there any objective information that even comes close to substantiating any of these claims?
Re:How could they possibly do this cheap? (Score:2)
I'm thinking there's no way they can afford not to.
MS wants to take out Sony as the leader in the game market. Their tactic is to take cheap PC hardware, snap up hot games and make them exclusive by buying out the game companies. Throw a lot of advertising on it and voilà. Anyone can see the writing on the wall.
Sony has to come out with something that will smoke whatever the XBox2 is going to be. Not just your regular smok
Re:How could they possibly do this cheap? (Score:3, Informative)
Sony is going with Cell from IBM and an nVidia graphics chipset. So I don't see a huge difference. My guess is that both consoles will have extremely similar performance and this next generation of consoles will be the most boring ever -- lots of multi-platform games that look identical.
STI (Score:3, Funny)
Reason why IBM sold PC unit to China? (Score:2, Interesting)
multicore, stream-processing, vector-oriented BS (Score:5, Insightful)
No cache for CPUs? A breakthrough? Hello! Both the PSone and PS2 have the so-called scratchpad, which is what the Cell seems to have: a local memory which has to be managed explicitly by the programmer. Breaking news: this is a royal pain in the ass. And calculating bandwidth when reading from these tiny scratchpads makes about as much sense as calculating the speed at which an x86 processor can execute MOV EAX, EBX.
Magically "the OS solves everything", and, in an obvious attempt to automatically get OSS-crowd support (is that "slashdot-trolling" or "slashdot-baiting"?), the triumph of Linux is predicted, because it's portable. Good luck getting the Linux kernel and GCC compiled, let alone running well, on a massively parallel array of tiny CPUs without cache.
Obscene (Score:2)
Unfair comparison (Score:3, Interesting)
From TFA: "Existing GPUs can provide massive processing power when programmed properly, the difference is the Cell will be cheaper and several times faster."
It's supposed to do 250 GFLOPS when? Two years from now? Apparently the GeForce 6800 Ultra will do 40 GFLOPS, and that's today... extrapolate with some doubling here and there and it seems a lot more reasonable.
So the big thing is that it comes down to programming. It came up a few times in the article: "Doing this will make it faster but will make for one hell of a time for the programmers." It may have huge potential, but it may take a while to get everything running as efficiently as Sony would like. Reminds me of when the GF3 first came out and was beaten by the GF2U in some tests. IIRC it took a while for games to come out that took advantage of its programmability. It'll be interesting to see how well the programmers fare between now and Cell's release.
It needs some serious software to work! (Score:4, Interesting)
Also, let's not forget that developers will be unable to keep up unless some highly sophisticated libraries and languages are made available. I really don't expect the majority of developers to be able to cope with massive parallelism from the beginning (not just 2x SMP or hyperthreading; this needs a totally different mindset).
To sum this up: the hardware will deliver, but the software is a critical unknown in the equation. I have faith in IBM
P.
Locked Up (Score:2, Interesting)
Fortunately, cell reading meant I hardly noticed the claim that hardware would compete with the x86 because, unlike the x86, cell computers need all their software written for the specific hardware.
I like how "hardware-specific" becomes "OS-independent". Great, I can plug my H
Cell Architecture (Score:2, Funny)
Nicholas Blachford is an idiot. Please don't read. (Score:5, Funny)
http://www.blachford.info/quantum/gravity.html
Also, look at the nose pictures of him
http://www.blachford.info/other/me.html
Seriously, the guy has burned most of his sane braincells.
For a serious laugh, read his article series 'Building the Next Generation' from OSNews. I really got good laughs from that 4-part series.
Also, it didn't take long to spot a totally idiotic statement from today's slashdotted article:
> Parallel programming is usually complex but in this case the OS will look at the
> resources it has and distribute tasks accordingly, this process does not
> involve re-programming.
Here Nicholas misses the core problem of parallel programming. The program's algorithms _always_ have to be made parallel. The OS can't do it.
OK, it's theoretically faster than PCs. So? (Score:2)
Secondly, the 4 CELL processors of the PS3 will NOT give it a graphical edge over the PC. PS3 games will not be as impressive as the PC ones. The reason is that graphics will be handled by the NVIDIA GPU, not by the CELLs. But the PS3 will be stuck with a specific NVIDIA model, while PCs will be upgraded
Re:OK, it's theoretically faster than PCs. So? (Score:3, Insightful)
Secondly: anyone that buys a PC to play games on has more money than sense and is quickly parted from the latter.
TWW
x86, Apple etc Vs Cell my arse (Score:3, Interesting)
And this story is no different.
As many have noted, Sony did exactly this kind of hyping the last time around, when the PS2, with its Emotion Engine, was supposed to be the future of all things computing. As everyone knows, the PS2 was a real pain to code for, and the actual performance was not better than the PCs of the day. The Cell will undoubtedly suffer from the same problems when it comes to coding real applications. Concurrency and parallelism do not an easier coding experience make.
I have no doubt that this thing will be good, but I absolutely doubt that it will have much or any effect on the x86 world of computing. The G4, when it came out with the AltiVec SIMD unit, which was apparently better than SSE at the time, didn't turn Apple into the next Microsoft overnight either, did it?
So I expect that the x86 world will continue to thrive, and that Apple will stick some of these Cell processors (having as they do a PPC 970, aka G5, at their core) in some of their machines and make the usual wild RDF claims about how hot it is, while in reality it will be used by only a small fraction of actual Mac developers, the Mac having to maintain backward compatibility only slightly less than the x86 world does.
In other words, it'll be business as usual.
Re:What's that? Microsoft isn't supporting it? (Score:2)
I don't know if I can take this article seriously. A games console due out within the next year or two is going to be as powerful as 20 of our current top of the range chips? I'm not buying it.
Re:What's that? Microsoft isn't supporting it? (Score:2, Insightful)
Re:What's that? Microsoft isn't supporting it? (Score:2, Insightful)
Re:What's that? Microsoft isn't supporting it? (Score:5, Insightful)
Back to the article. The guy seems to understand hardware, but he does not understand shit about software. Once he got past the first 3 parts he started babbling. Linux on Cell, so on, so forth. If he had just read his previous parts he would have hit himself on the head. The only type of Linux this can run is uClinux. There is no memory protection as such. So no Linux, no Windows past 2000, no MacOS past X, so on, so forth.
Similarly, it is all nice and well about Cell software beasties making herds by themselves and cooperating on a task. I am going to be a spoilsport and ask a nasty question: Err.. what about a security model? Memory protection? A privilege model for communications? So on, so forth...
To continue on this, the power of a modern general purpose OS is the task switching. How long does it take to load and store the context of the vector processing units? Doing so requires moving their dedicated memory to main memory. This will take ages.
Overall, this is a design similar to the initial Cray-1 design. Cray's initial design smashed the IBM, DEC (and lesser fish) monopoly on big computing iron to bits. Unfortunately, the next thing the people buying Crays asked for was "can we share this resource between two people?". The answer was provided eventually, but by the time Cray could do all the nifty time-sharing and memory management tricks necessary its advantage was no longer phenomenal. And all the people who could use Crays for single tasks with manual scheduling actually continued to use them that way. But it never even dented the general-purpose big iron market.
Re: (Score:2, Informative)
Re:What's that? Microsoft isn't supporting it? (Score:2, Insightful)
Re:What's that? Microsoft isn't supporting it? (Score:5, Insightful)
This part I agree with. His statements regarding abstraction are just flat out incorrect. Is this going to be programmed in assembly only? I think not...and if not there is significant abstraction involved. The thing that's closest to his point is that multiple *layers* of abstraction tend to add significant overhead. That doesn't mean that program-level abstractions do.
Once he got past the first 3 parts he started babbling. Linux on Cell, so on, so forth. If he had just read his previous parts he would have hit himself on the head. The only type of Linux this can run is uClinux. There is no memory protection as such. So no Linux, no Windows past 2000, no MacOS past X, so on, so forth.
There is memory protection if the PU is in fact "something like a G5". IBM would have to be insane not to include an MMU, and it has already stated that it's going to build workstations based on the Cell architecture.
All in all, interesting stuff...we'll see how it plays out. :-)
To continue on this, the power of a modern general purpose OS is the task switching. How long does it take to load and store the context of the vector processing units? Doing so requires moving their dedicated memory to main memory. This will take ages.
This, of course, depends on how many cells are in the box (with 8 vector units per cell) and how many tasks need vector units. The main purpose of the vector units in an interactive workstation will be multimedia processing. How many multimedia applications can you view at once? For me, the answer is one. The vector units may be useful for other things like engineering simulation and pattern matching, but once again how many different tasks using those features will be running at once? Plus if the processors are cheap enough to put 4 in a Playstation, one hopes the workstations will have 8 to 32 of them.
Overall, this is a design similar to the initial Cray-1 design. Cray's initial design smashed the IBM, DEC (and lesser fish) monopoly on big computing iron to bits. Unfortunately, the next thing the people buying Crays asked for was "can we share this resource between two people?". The answer was provided eventually, but by the time Cray could do all the nifty time-sharing and memory management tricks necessary its advantage was no longer phenomenal. And all the people who could use Crays for single tasks with manual scheduling actually continued to use them that way. But it never even dented the general-purpose big iron market.
Two points. First, this is based on an already successful processor - the Power series. It already multitasks :-) and is used in a wide range of applications. Second, this will be a low-cost part. Crays were super high-end systems costing millions of dollars. Your analogy doesn't work.
Re:What's that? Microsoft isn't supporting it? (Score:2)
If only the Cell could do what is required of a desktop processor... I.e. running tens of processes with hundreds of threads in total (this WinXP PC currently has 61 processes and 523 threads running, with current memory usage at half a gigabyte) at the same time, with seamless virtual memory with disk
Re:What's that? Microsoft isn't supporting it? (Score:5, Insightful)
And besides, this isn't about "Office"-style apps. It's about games and, more importantly, home media centers. I think Windows MCE is going to have its rear end handed to it by the PS3.
When you consider that a Cell-based PS3 could have the computational power of *several times* a 3 GHz Pentium...
You have to ask, what's more likely: that Intel can get around the IBM/Toshiba patents in time for Windows to conquer the living room with a faster box (that's if they can even build a secure, stable OS with a decent UI)? Or that Sony, now armed with the world's fastest consumer-computing platform, an enormous user base and years of TiVo experience, will own the living-room media-center market?
If I had to bet on who builds a better media-center PC
Re:What's that? Microsoft isn't supporting it? (Score:2)
Re:What's that? Microsoft isn't supporting it? (Score:2)
Re:What's that? Microsoft isn't supporting it? (Score:2)
My Mythbox is unencumbered by proprietary crap. Can you say the same for your PS3?
Re:What's that? Microsoft isn't supporting it? (Score:2)
IBM knows this....Sony knows this....Apple knows this. Microsoft knows it as well, thus the lack of steam behind Longhorn.
Intel?
Re:What's that? Microsoft isn't supporting it? (Score:3, Insightful)
Pft. People have been saying this every time a new console generation is coming. When the upcoming Playstation 2 was hyped, some people were claiming it would easily emulate a PC at many times the speed of an x86. When it came, people couldn't take full advantage of the hardware. When they could some years later, PC hardware had surpassed it. Besides, p
Re:What's that? Microsoft isn't supporting it? (Score:2)
Re:What's that? Microsoft isn't supporting it? (Score:2)
The article makes such outrageous claims of the Cell's powers that it makes LITTLE difference if office apps run on it or not. You don't need office apps on your supercomputer.
Is that really that revolutionary? (Score:3, Informative)
Re:What's that? Microsoft isn't supporting it? (Score:2, Insightful)
Re:What's that? Microsoft isn't supporting it? (Score:2)
Well, this could very well be the next "most powerful PC in the world" campaign from Apple. :-)
That said, I think one of the killer apps for this could very well be excellent voice recognition. That alone could provide a giant advantage over existing architectures.
Re:What's that? Microsoft isn't supporting it? (Score:2)
Sony and Toshiba may end up owning the living room, while Microsoft etc. fight it out for the desktop. Everyone in the PC industry thinks that the living room will make them more money...
Re:What's that? Microsoft isn't supporting it? (Score:2, Funny)
With great power comes loads of software
Re:Microsoft isn't supporting it? Who Cares? (Score:3, Insightful)
It's not crap; we produced release versions of our graphics software for Windows on x86, PowerPC, MIPS and Alpha at one point. Shipped some, too. We had machines for all four architectures (still have them, in fact, though the Alpha and PowerPCs are mothballed), development tools, an
Re:Microsoft isn't supporting it? Who Cares? (Score:4, Informative)
There are two operating systems Microsoft has developed called Windows. DOS/Windows, the original one, was based on an x86 clone of CP/M that Microsoft bought. The first version, "Windows 1.0", was released in 1985. The last version, called "Windows Me", was released in 2000, IIRC. This OS was always x86-only, originally ran on archaic CPUs without memory protection, and never supported full protected memory, symmetric multiprocessing or other (now) basic OS features.
The second OS developed by Microsoft that's marketed as Windows is Windows NT (now just called "Windows"). It was started in 1988, and never had any relation to DOS/Windows, except insofar as it can (to some extent) emulate it for compatibility reasons (including an x86 emulator on hardware that can't natively execute x86 code). Windows NT was developed on the MIPS platform, not the x86. The original plan had been to use the Intel i860 (an LIW architecture completely different from the x86) as the development platform, but the i860 hardware never met its promise, so MIPS was chosen instead.
The first version of Windows NT was released in 1993, and called "Windows NT 3.1" (3.1 was used for marketing reasons, since that was the latest version of DOS/Windows at the time). Like UNIX, it was mostly written in C, with assembly at the low level to handle hardware dependencies. At its release, Windows NT 3.1 ran on 32-bit MIPS (the development platform) and 32-bit x86 (the first port).
The second version of Windows NT (3.5) was released in 1994, and planned to add 64-bit Alpha (in a semi-crippled, 32-bit mode) and 32-bit PowerPC. However, IBM and Motorola ran into problems with the hardware (in part because of ongoing disagreements with Apple, who wanted to use their own, proprietary platform), so Windows NT 3.5 only added Alpha support. In 1995, after IBM and Motorola had managed to (mostly) sort out their problems (but with Apple declining to follow the IBM/Motorola PReP standard), the PowerPC port of Windows NT was completed, and released as version 3.51. At this point, the OS ran on MIPS, x86, Alpha and PowerPC.
In 1996, the user interface of Windows NT was upgraded to match the user interface of the popular 4.0 release of DOS/Windows (called Windows 95). Windows NT 4.0, which copied the user interface of DOS/Windows 4.0, ran on MIPS, x86, Alpha and PowerPC.
By the late 1990s, as Microsoft continued work on version 5.0 of Windows NT, the market had lost confidence in non-x86 systems for general-purpose PCs (apart from Apple Macs, which didn't follow the PReP standard, so couldn't run OSes ported to it, like AIX and Windows NT). As a result, Microsoft and the vendors of MIPS and PowerPC workstations agreed to cease development and marketing of NT 5.0 for those platforms. Windows NT 5.0 continued to be developed for the x86 and DEC Alpha architectures, into the beta releases.
DEC (which was taken over by Compaq) had continued to have hope for the Alpha as a general-purpose alternative to the x86, but financial difficulties led to the project being abandoned towards the end of the development cycle for Windows NT 5.0 (marketed as "Windows 2000"). As a result, Windows NT 5.0, completed at the end of 1999, was the first version of NT that only ran on one platform (the x86).
A port of Windows NT 5.0 to the 64-bit Intel Itanium, including 64-bit versions of the Windows APIs (unlike the earlier Alpha port), was released in 2001, but only to select customers.
Windows NT 5.1 (marketed as "Windows XP") was also released in 2001, and again only ran on the x86, apart from another 64-bit limited release for Itanium (in 2002, IIRC).
Windows NT 5.2 (marketed as "Windows Se
Re:Right at the end of the article (Score:2)
Re:err (Score:2)
No longer true (Score:2, Informative)
This begat RISC. A CISC computer had a more complex instruction set, but that barely left it with enough transistors for a couple of general-purpose registers. A RISC computer, on the other hand, went by the mantra "never do in hardware what the compiler can do for you", so it had an over-simplified instructi
Re:No longer true (Score:2)
The _whole_ idea and advantage (for that time) of RISC was that it basically exposed its microcode to the outside world. (Well, not 100% accurate, but as a metaphor it will have to do.) Any instruction required only minimal decoding to directly drive the ALU and the rest of the CPU
Re:No longer true (Score:2)
So at some point CPUs just had enough transistors to return to something that had been used on minis and mainframes long before RISC. RISC had _nothing_ to do with it. If anything, RISC was born from micro-coded designs, not the other way around.
In a way, though, unlike other marketing BS, this one isn't entirely inaccurate. That much I must admit. A RISC CPU isn't that different from a micro-code execution
Re:No longer true (Score:2)
Re:So (Score:2)
Also can it run linux?
I'm still trawling my way through the article, but this is a brand new, non-x86 architecture. I would imagine there is a lot of work to be done on porting any OS to work on it properly.
A more pertinent question would be: can Linux run on it before Windows does? If there really were a big shift in hardware platforms, which I suppose there must be at some point, then the development speeds of different OSes will really make a difference to who dominates.
If this really is a big shi
Re:Speed issues (Score:2)
A good 65nm process? =)
IBM is making some amazing process breakthroughs lately...