IBM One-Chip Dual Processor Due Next Year
PureFiction writes, "Looks like IBM is going to be scaling processors at the chip-die level. ZDNet has this story about plans for a dual-processor, single-die chip that will operate at upward of 2 gigahertz. It will be called the Power4, will use a .18 micron fab process, and will feature on-chip L2 cache (supposedly quite large, though no numbers are mentioned) and a 500 MHz bus. I wanna overclock one of these bad boys ..." Better get out your pocketbook, then -- they're slated to power RS/6000 servers rather than consumer PCs, at least for a while. 64 bits, copper interconnects, and plans to move down to a .13 micron fab show that IBM is thinking long-term. Similar technology may reach your desktop first, though, in products like AMD's Sledgehammer.
Re:What took you all so long ? (Score:1)
Either this shows your fundamental misunderstanding of how SMP is implemented in modern OSes, or W2K's SMP is where Linux 2.0.x's was (Megalock. Only one process in kernel at a time). Judging from the benchmark scores, I'd put a lot of money on option A.
BTW, SMP OSes don't call that "load-balancing"; they call it scheduling.
I'm violating an NDA here but... (Score:1)
I saw a VLSI layout of one of these puppies about a year ago. It's one of those things where you've got to do a double-take. One side was a mirror image of the other. At first I thought it was an old POWER mirrored for double-redundant mission-critical stuff. But then I noticed the linewidth...
Hiding before IBM lawyers get to me...
Re:Power arch at 500 MHz! Correction (Score:1)
-Paul Komarek
Re:Fort Knox, phsa! (Score:1)
Actually, Goldfinger tried to irradiate it to make it unusable; there was simply too much to carry away.
But there's still 140 million ounces of gold at Fort Knox according to the U.S. Mint's web site.
PowerPC is approx 40-50% faster. (Score:1)
Though (hint hint, IBM/Motorola) it *would* be really nice to have a 1+ GHz PPC! A 1 GHz PPC would be approximately equivalent to a 1.5 GHz Intel.
Of course in *real* life, CPU speed is largely irrelevant. RAM and disk performance is much much more important. (It's all about I/O)
Re:What took you all so long ? (Score:1)
--
Re:overclocking (Score:1)
I think few people have the cash to "risc" overclocking such expensive processors.
Re:Not consumer level, thats for certain. (Score:1)
My bad. (Score:1)
Other IBM developments (Score:1)
Should be a good match for these new CPUs.
Re:Not consumer level, thats for certain. (Score:1)
Re:Explanation - Re:What took you all so long ? (Score:1)
could they pull the SX trick? (Score:1)
Power PC G4 (Score:1)
now all i need is a quad core-quad processor G4 and i can take over the world....
Re:overclocking (Score:1)
-BW
Re:Overclock? (Score:1)
The Power4 chip is expected to show up in similar models and, I would expect, in similar price ranges.
Re:interesting details (Score:1)
OK, each die has 2 independent cores, with a shared 4MB L3 and their own memory controller to RAM. They also have two ultra-high-speed links to connect to other chips.
Each cartridge (IBM's famous ceramic substrate) contains 4 dies, connected to each other via their high-speed interconnects; for power, ground, memory, and I/O they have in excess of 2,000 BGA 'pins', requiring something like half a ton of force to hold it to the motherboard!
It gets even better :-) The power estimates are around 125W/die, so for the cartridge you are looking at half a kilowatt of power! For a 32-way system, you would have 4kW of power in the processors alone. You still have to add drives, memory, I/O processors, and fans. Is that just nifty or what?
That's no computer... That's an industrial heating system!
- Mike
Fort Knox, phsa! (Score:1)
Only thing in Fort Knox is a few "Guards" sitting around playing cards. No gold or silver there, was gone long ago... :>
PowerPC WAS approx 40-50% faster. (Score:1)
I mean, this SUCKS. IBM had the first demo silicon at 1.1 GHz almost 2 years ago now. Two years ago, the AIM consortium's projections promised us 1 GHz chips with multiple processors per die by 2000. For servers, they seem to have only slipped 3-6 months, but us desktop PPC users are stuck with x86 envy.
x86 envy! Of all the *#@! architectures out there, it has to be one of the most arcane, messed-up designs that is beating the pants off of everyone. I mean, all the addressing modes, the stack-based FPU, variable-length instructions, and MMX/3DNow/SSE! Have you ever downloaded volume 2 of the "Intel Architecture Software Developer's Manual", the instruction set reference? It's 854 pages! How can companies with so much baggage to work with beat everyone else to the punch on 1 GHz?
Gah. It's bad enough to realize that we'll probably never see the end of that hideous kludge of a design, much less that it's because it's beating the pants off of cleaner designs due to production problems. Makes me nauseous...
For that matter where's our 1 GHz+ Alphas?
IBM Chip Developments (Score:1)
Provided there is always a market for the top-end, proprietary (and expensive) closed architectures, then IBM (and others) will continue to generate the profits to research and build leading-edge stuff. Will you and I ever have one of these babies on our desktops, or will my server at home running Linux or FreeBSD or whatever thump along at 2+ GHz? Not likely, but you can bet that in the next few years most consumer-level chips will use some of these features.
Moore's Law will last at least a few more years, I expect.
Maclir
Disclaimer: I once worked for IBM in the bad old days.
Re:Starting at 1.1GHz? (Score:1)
1+ gigahertz (2 GHz comes later; it starts at 1.1 GHz in 2001)
Dual processors on one die
500 MHz bus
LARGE L2 cache (I would imagine 2-4 MB)
64 bit
-------------------------------
x86 CPUs
1+ gigahertz (this year; should be 1.5 to 2 GHz by the time the Power4 launches)
One processor per die
400 MHz bus (not 200 MHz)
512 kB-2 MB L2 cache (most probably 1 MB, but the Foster will have up to 4 MB, since it is a Willamette for servers)
32 bit
Doesn't look so bad for x86 anymore...
Re:Overclock? (Score:1)
Negroponte, I thought, defined bits as the immaterial thingies, and atoms as the material ones.
So shouldn't that be atoms are atoms!?
Or perhaps you were referring to something else
Re:Overclock? (Score:1)
Johan
Re:Performance (Score:1)
They need OS support to switch from one instruction stream to the other. Without that OS support, it is up in the air what the IBM multichip would do (I guess I should read the link, eh?), but I imagine that one of the cores is a master and is the only one activated on startup. It is up to it to run the OS code that initialises the other cores with the appropriate instruction streams.
Re:Explanation - Re:What took you all so long ? (Score:1)
Re:OverClock (Score:1)
I draw the line at my large intestine. Ick.
Re:overclocking (Score:1)
Re:Explanation - Re:What took you all so long ? (Score:1)
That's true. But as your geometry gets smaller, you become vulnerable to ever-smaller bits of dust and smaller defects/imperfections.
Of course there are other things: we know they've gotten yields up higher and higher per transistor, because they keep packing so many on... I think another poster implied that it must have simply been more cost/performance effective to design more complex single core chips than to try and do multi-cpu chips with the less complex cores...
There must be a technical/trade paper/review out there somewhere which details not only what all the issues, sub-issues, and permutations of issues have been over the past 10 years on this, but what the actual numbers/progress on each item have been, and how the math actually worked out along the way, and thus show what things were actually important in getting the yields high enough to do this. It would be an interesting read.
-NH
Hey zzg: Are you saying that the 486 SX's were chips which had defects/failures in their caches, and thus 'selected' for cache-disabling? (I knew they had their caches disabled, but I don't think I knew/figured that it was a by-product of the yield failures... I think I just figured it was a corporate decision to hobble and sell into the lower cost market...)
I wonder.... (Score:1)
Hmmm, four Crusoe processors on one chip....
How about hundreds of small processors in one die? (Score:1)
I have the impression that implementing a 2n-bit instruction set requires exponentially more die area than an n-bit architecture.
Modern Intel 32-bit processors have tens of millions of transistors. The 80286 processor, with its 16-bit architecture, had something like 125,000 transistors.
Imagine a processor die which would run hundreds of tightly coupled, identical 16-bit cores at a modern clock frequency of 1 GHz.
The arithmetic that a program does can be broken into sub-problems that work within 16-bit number ranges. When you need bigger numbers, you could use software emulation of them. Accessing large data sets should be done through object methods anyway, not by direct addressing, so you don't necessarily need the traditionally desirable large, flat address spaces.
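Just to illustrate the software-emulation part, here's a toy sketch in Go (the function name and the exact 16-bit split are mine, not any real ABI): a 32-bit add built from 16-bit halves with an explicit carry.

    package main

    import "fmt"

    // add32 builds a 32-bit add out of 16-bit halves, carrying between
    // them -- the software emulation of bigger numbers described above.
    func add32(aHi, aLo, bHi, bLo uint16) (hi, lo uint16) {
        sum := uint32(aLo) + uint32(bLo)
        lo = uint16(sum)
        hi = aHi + bHi + uint16(sum>>16) // propagate the carry
        return
    }

    func main() {
        // 0x0001FFFF + 0x00000001 = 0x00020000
        hi, lo := add32(0x0001, 0xFFFF, 0x0000, 0x0001)
        fmt.Printf("%04x%04x\n", hi, lo) // prints 00020000
    }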
All in-die processor cores could have a sizable private memory for highly dynamic small objects like run-time system constructs and such.
It should be easy to design and optimize a processor that is built of small, identical, cloned parts.
Of course you would need parallel programming techniques to use the power. But modern languages like Bell Labs' Alef and Limbo make it fairly easy and straightforward to write highly parallelizable programs. Most current systems use threading anyway. The channel abstraction in Hoare's "Communicating Sequential Processes", later in the Occam language and in those Bell languages I mentioned, works nicely in this kind of architecture.
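For the flavor, here is that channel style in miniature -- written as Go code since it shares the same CSP lineage; the four-core worker pool and the squaring "work" are made up for illustration:

    package main

    import "fmt"

    // worker pulls jobs off one channel and pushes results onto another --
    // the CSP channel abstraction in miniature.
    func worker(jobs <-chan int, results chan<- int) {
        for n := range jobs {
            results <- n * n // stand-in for a 16-bit sub-problem
        }
    }

    func main() {
        jobs := make(chan int)
        results := make(chan int)
        for i := 0; i < 4; i++ { // pretend these are four of the small cores
            go worker(jobs, results)
        }
        go func() {
            for n := 1; n <= 8; n++ {
                jobs <- n
            }
            close(jobs)
        }()
        for i := 0; i < 8; i++ {
            fmt.Println(<-results)
        }
    }

The nice property is that nothing in the program says which core runs which worker; the channels carry all the coordination.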
Network cards (Score:1)
The discussion was cut short when it was pointed out that you would have to change all the network cards connected to the same network for it to work.
As far as I know, nobody has tried this yet.
Re:Starting at 1.1GHz? (Score:1)
Re:Damnit.. (Score:1)
Anyone who runs a website which gets mentioned on
Starting at 1.1GHz? (Score:1)
Sure it will be cool to have two processors within a single die, sure it will be cool to have a 500MHz bus... but the article makes it sound like the clock speed will be something really great, while in fact it is a little disappointing.
Re:Network cards (Score:1)
You want to make the battery run faster? How odd. I'd prefer it to run slower, and thus last correspondingly longer.
Specs on the power3...predecessor of the power4 (Score:1)
Re:What took you all so long ? (Score:1)
Memory bandwidth is a problem in all SMP systems, AFAIK. Maybe what they were waiting for wasn't the capability to put two or more cores on a chip. Maybe it was multiple cores + on-die L2 cache to alleviate the memory bottleneck problems.
Re:On the Desktop? (Score:1)
I can't wait!
The guy who wrote this article is a dick! (Score:1)
I wanna overclock one of these bad boys
Seriously, the chip runs at over 2 GHz and has a 500 MHz bus, and the first thing PureFiction says is "I wanna overclock one of these bad boys"? Get a life, dick.
Re:Starting at 1.1GHz? (Score:1)
I read it in Microprocessor Report several months ago. It was a very good article, but the chip rotation really impressed me.
Nice to see lateral thinking is alive and well!
Run Linux on this? Of course! (Score:1)
Thanks to the folks at Terra Soft: Yellow Dog Linux! [yellowdoglinux.com]
See it in action on a prototype.... Applefritter [applefritter.com]
OSX - Hoza bout it Steve (Score:1)
Re:Two sets of register files (Score:1)
The utilization of this CPU (percentage-wise) would be the same as the utilization of just one of the cores, since it is two cores and not just one core with two register files. One core cannot dump extra instructions into the other core, since the internal issue buffers are separate and probably not shared.
Re:Performance (Score:1)
You mentioned threads... yes, this CPU can process two threads at the same time... so can SMP machines...
A thread is a concept of the operating system... by definition, a thread shares the same memory space as other threads of the same task, etc. The CPU has no idea what you want to do; it only crunches numbers/streams of instructions.
The OS schedules the instructions onto the CPU. Therefore, in order to crunch vertices 1-1000 and 1001-2000 simultaneously, it is up to the OS to tell the processor to do that, i.e., set the two PCs to the correct locations. Assuming this machine does run an OS (most if not all machines do these days), the OS will need to schedule other processes/threads, which means overhead... as much as on SMP machines.
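To make that concrete, here's roughly what the split looks like from the program's side (a sketch; the ranges and the made-up transform() are just for illustration, and the OS still decides which core each thread lands on):

    package main

    import (
        "fmt"
        "sync"
    )

    // transform is a made-up stand-in for whatever per-vertex math you do.
    func transform(v int) int { return v * 2 }

    // crunch processes vertices lo..hi; each call runs as its own thread,
    // which the OS must schedule onto one of the two cores.
    func crunch(lo, hi int, out []int, wg *sync.WaitGroup) {
        defer wg.Done()
        for v := lo; v <= hi; v++ {
            out[v-1] = transform(v)
        }
    }

    func main() {
        out := make([]int, 2000)
        var wg sync.WaitGroup
        wg.Add(2)
        go crunch(1, 1000, out, &wg)    // one core, with luck
        go crunch(1001, 2000, out, &wg) // the other
        wg.Wait()
        fmt.Println(out[0], out[999], out[1999]) // 2 2000 4000
    }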
Your message also contradicts your previous one. If the CPU is two complete cores, the execution units cannot be shared, since the issue buffers are not shared. You cannot issue an instruction that resides in one core into the other core; there are no paths connecting the two. The article only mentioned that the L2 can be shared, not other buffers lower down the abstraction.
The architecture they described in the article seems to me like SMP on a single chip, which is different from multithreading. Go read about these two, and learn the differences...
It's funny how a message that is totally wrong still gets a score of 2... it just shows that the majority of Slashdot readers, even the moderators, do not know much about the internals of computers... not that I expect them to, though... but it's just funny.
Two sets of register files (Score:1)
But IBM could have changed the architecture since then...
I think the whole point was to keep CPU utilization (and execution-unit utilization) high. Seems to me that slapping two complete cores together is kind of dumb, because both cores will still be underutilized... half the silicon will be sitting there unused...
Re:Already here with current chips? (Score:1)
Of course there comes a time when everything needs to be redesigned, and that usually takes a lot longer; for example, the Intel Willamette chip... how long has that chip been in development? The last chip the Willamette team designed was the original Pentium... that was almost 5-6 years ago! (if not longer...)
We are seeing many chips coming out in short periods of time because both Intel and AMD have multiple development teams that leapfrog each other to release new CPUs. Intel, for example, has at least 3 teams, the Itanium team, the Willamette team, and the Pentium {II, III} team.
Re:Performance (Score:1)
This means that in order to utilize this dual-core, single-die chip, we still need an OS, which means overhead... I would think the overhead is as much as on SMP machines.
Re:Performance (Score:1)
This would improve raw performance and will be much better than SMP setups, although I think the overhead _percentage_ would be about the same.
Performance (Score:1)
New Technology (Score:1)
What I want to know is when IBM will make some chips with this technology, seeing as how chips will probably push well past 2 GHz.
Overclock that bud...
Do not provoke me to violence, for you could no more evade my wrath than you could your own shadow.
Power3 is PowerPC (Score:1)
Re:overclocking (Score:1)
Re:overclocking (Score:1)
Re:Overclock? (Score:1)
How would you overclock a "production (by production I mean RS/6000 AS/400 type proprietary machines)" type server? This isn't some BX motherboard with clock speed jumpers.
The old-fashioned way would probably be the easiest: change the frequency that the chip uses for timing. Either swapping out the crystal or modifying the traces that are used to set the timing frequency would do it. That's what the BX boards do.
Remember: bits are bits!
You could "Kryotech" it, but I think there would already be vast amounts of cooling, it being 2 processors on one die running at 2 gigahertz even with a .18 micron fabrication.
Cooling is a necessity after you actually increase the frequency...and we're back to the crystal again.
Hoo hoo, Magic Bus! (Score:1)
Re:overclocking (Score:1)
Topher
"I've not met a human I thought was worth cloning, yet. Lot's of cows though." -- Mark Westhusin
Re:overclocking (Score:1)
Do you know any engineers?
I am an engineer; or, at least, I pretend to be one most of the day. I help design chipsets for high-end systems.
Yes, we provide some margin when we set operating frequencies. But we spend an awful lot of time determining the operating boundaries. The word "estimate" doesn't give the full flavor of what we do.
I'm not going to provide details, because they're probably confidential. But I will say this: We know the voltage/frequency/temperature points at which our chips stop working properly. And it's in our interests to push the frequency as high as it will go.
Call me a wimp. Go ahead. But I don't overclock my system, and I definitely wouldn't overclock anyone else's.
Microprocessor Report article on power4 (Score:2)
Re:OverClock (Score:2)
What took you all so long ? (Score:2)
As for this IBM chip: what took you all so long? SMP on a single chip is an obvious advance. When you vastly increase the number of circuits on a chip, as happens between a Celeron and a P3, without a matching increase in performance, something has to give. Why not make that the number of cores on the chip? I hope this isn't patented, because it really is obvious.
This brings up something I have been thinking about with the Crusoe: you can convert 32-bit instructions into 128-bit meta-instructions and have the finished product run as fast as on the genuine 32-bit CPU.
What if the same technique were applied to an SMP setup in such a way that the software sees the processors as a single CPU? Right now this kind of abstraction is handled by the operating system, and except on the mainframe that is very inefficient -- to the point where 2x400 MHz CPUs are a whole lot faster than 4x200 MHz.
Now if the whole thing, including say 6 CPUs and 2 megs of cache, were put on a single chip at 500 MHz to 2 GHz, how fast would it be? My guess is that this could easily be the fastest low-end server or workstation chip by a good margin.
Re:overclocking (Score:2)
Yes, you can, if you're prepared to take the risk -- that's the whole basis of overclocking. Chips are rated at the speed the manufacturer can guarantee they'll operate as intended. Say you overclock your chip by 15%. You're now encroaching into the safety margin that the engineers and the manufacturer allowed to be sure that all chips will work correctly. Even so, perhaps 98% of all chips will be OK. Do you want to gamble on whether or not you've got one of the 1 in 50 chips that won't work? Personally, I don't like the odds, particularly when the chips cost as much as this one will...
Re:overclocking (Score:2)
The risk is both damage to the physical hardware and data corruption. The hardware can easily be replaced when it's a cheap Celeron, but not when it's a dual core IBM Power CPU. The data corruption can't be ignored, though. Don't believe me? Maybe you'd like to hear it directly [indiana.edu] from someone you might trust.
Re:overclocking (Score:2)
Maybe they do, maybe they don't. You're missing the point though. If you want faster speeds, go buy faster processors (or more of them). Overclocking is only for those who can't afford to do that. People buying these chips aren't going to fall into that category.
The other point to consider is that overclocking an SMP system is tantamount to suicide, by all accounts. Now maybe that won't be the case here, because the cores are on the same die, and hence will be affected in exactly the same way, but I don't know enough about it to be sure, and I certainly wouldn't risk it.
Re:What took you all so long ? (Score:2)
Depends on what you're doing, my boy. If you're running 4 different CPU-hungry jobs, a 4X200 may well be faster than a 2X400 -- assuming everything else about the processors is equal.
Q: How do I Overclock my Light Bulb? (Score:2)
I've been trying to overclock my lightbulb, and I thought I'd ask you gurus on Slashdot for some pointers. My bulb says "60W" on it, and I want to get it up to 75 or 100.
Not consumer level, thats for certain. (Score:2)
Sure, *nix, BeOS, and NT (2000) are, but the majority of people still run 9X on their desktops.
Quake 3 and Unreal Tournament support SMP, but there are few consumer-level applications that do. Apparently BeOS can force multithreading, and this is cool, but what we really need are more apps that can take advantage of parallel calculations. Even Carmack states that dual processors running Q3A only increase performance in the most demanding situations.
Even the guys who maintain the Beowulf HOWTO (someone is going to post this...) say that parallel computing is great for crunching data, well, IN PARALLEL. Quake is not parallel. Clock speed matters more in 3D shooters than overall crunching power (unless you *like* a slideshow).
Don't get me wrong, I personally would love to have a machine running either Linux or BSD with one of these things in it (or many) but I don't know what the hell I would do with it.
Until then I will stick with a BP6 and dual Celerons; heck, maybe flip-chips or the new Jalapenos from VIA/Cyrix.
I think that this is the way of the future, but we won't see it on the desktop for at least 5 years. (IMHO)
I wonder... (Score:2)
Re:How about hundreds of small processors in one die? (Score:2)
Re:overclocking (Score:2)
Heck, throw them away every four months and upgrade anyway. Celerons are cheap as dirt, and when overclocked, are as fast as far more expensive P-III's.
What's the risk?
Torrey Hoffman (Azog)
Re:overclocking (Score:2)
Actually, when I used to work at Ross (they used to manufacture CPUs for Suns) in their modules lab, one of the things we routinely did was overclock the CPUs (not to mention other nasty little tricks involving soldering, cutting traces on the MB with an X-Acto knife, etc.). Mostly it's just a matter of providing proper heat sinks and air circulation. So it did actually occur to at least someone.
Re:What took you all so long ? (Score:2)
Enough with the cynicism! This is desktop tech! (Score:2)
Hey guys, are we quick to forget history? Every time people get up and proclaim that a given technology is too expensive / not needed / 640K is enough for the desktop, someone goes and proves them flat wrong.
One of two things happens. Either consumer technology just blows away these so-called "elite" chips (anyone want to compare one of those "elite" 150 MHz Alpha machines -- once a VERY expensive minicomputer chip -- with a 1 GHz consumer Athlon?), or "poof", it appears.
There are issues with semiconductor yields, as people mentioned previously. But with Celerons going for $70, it won't be too long before someone figures out how to do it cheaply.
Ahhh, SMP on chip. Long way from the 6502 babyee :)
Kudos
Re:Two sets of register files (Score:2)
Re:Performance (Score:2)
Re:What took you all so long ? (Score:2)
That is correct. To prove this is the case, you can set the affinity (which CPU a thread is bound to): Task Manager | Processes | right-click on a process | Set Affinity.
(This setting doesn't show up on a single cpu.)
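(If you'd rather do it programmatically than through the menu, SetProcessAffinityMask is the documented Win32 call behind that dialog. A sketch in Go, Windows-only; pinning to CPU 0 is just an example:)

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // SetProcessAffinityMask isn't wrapped by the standard library,
        // so call it straight out of kernel32.dll.
        kernel32 := syscall.NewLazyDLL("kernel32.dll")
        setAffinity := kernel32.NewProc("SetProcessAffinityMask")

        self, _ := syscall.GetCurrentProcess() // pseudo-handle for this process
        // Mask bit 0 set = run only on the first CPU.
        ret, _, err := setAffinity.Call(uintptr(self), 1)
        if ret == 0 {
            fmt.Println("SetProcessAffinityMask failed:", err)
            return
        }
        fmt.Println("pinned to CPU 0")
    }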
Another quick way to see this is to start up Quake and look at the CPU utilization. It will be around 50%, meaning one CPU is taxed while the other one isn't doing anything.
One means of burning in a new dual system is to run 2 copies of Prime95: one on each cpu.
For fun, I left 2 copies of Prime95 and one copy of Unreal running overnight. The one Prime95 hadn't gotten through as many calculations as the 2nd one.
Note: Windows NT runs the OS on both processors. It will not run a non-SMP-aware process on both CPUs.
For anyone looking for a cheap dual system, this is what I did:
$35 cel/366 o/c to 550
$140 Abit BP6
Hard to beat the price !
Cheers
Re:Superscalar vs. on-die SMP (Score:2)
2. I don't think that such a test (superscalar vs. SMP) would be useful, as the results would be very, very, VERY heavily influenced by the multi-threadedness (or lack thereof) of the benchmarks, and any two processors available will have enough other differences in architecture to invalidate the tests.
3. Both cores have small (16 or 32 kB, I think) L1 caches, but share a large (1.5 MB or 2 MB) L2 cache. Furthermore, several chips share L2s via a ring arrangement of uni-directional 128-bit 500 MHz buses, moving things around such that all cached data exists in the L2 of the chip that most recently accessed it, and in no other L2.
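A toy model of that migrate-to-the-requester behavior, if it helps picture it (the chip count, line names, and hop accounting here are all made up; real coherence hardware is vastly hairier):

    package main

    import "fmt"

    // ring tracks which chip's L2 currently owns each cache line; a line
    // lives in exactly one L2 at a time and migrates to whoever touches it.
    type ring struct {
        owner map[string]int
        chips int
    }

    func (r *ring) access(chip int, line string) {
        cur, seen := r.owner[line]
        switch {
        case !seen:
            fmt.Printf("chip %d: %s fetched from memory\n", chip, line)
        case cur == chip:
            fmt.Printf("chip %d: %s hit in local L2\n", chip, line)
        default:
            // Hops travelled around the uni-directional ring.
            hops := (chip - cur + r.chips) % r.chips
            fmt.Printf("chip %d: %s migrated from chip %d (%d hops)\n",
                chip, line, cur, hops)
        }
        r.owner[line] = chip // requester becomes the sole owner
    }

    func main() {
        r := &ring{owner: map[string]int{}, chips: 4}
        r.access(0, "A") // miss to memory
        r.access(2, "A") // migrates 0 -> 2
        r.access(2, "A") // now a local hit
    }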
On the Desktop? (Score:2)
Re:Starting at 1.1GHz? (Score:2)
OverClock (Score:2)
Always someone willing to ruin good hardware. Is there *anything* you people won't overclock?
Alpha has had similar plans for a long while now. (Score:2)
Superscalar vs. on-die SMP (Score:3)
It would be really interesting to see the results from using on-die SMP versus a chip that is just twice as wide (2n instructions, instead of n).
Also in question is how the caching is done. Do both cores update the same cache? Or do they operate on separate caches?
Re:Power arch at 500 MHz! (Score:3)
Yeah, IBM and Motorola are in bed again. But it's been on again, off again for years now. Don't count on it being a final merging of the two architectures.
=RISCy Business
Re:What took you all so long ? (Score:3)
1 terahertz is an obvious advance too. Just because it's obvious doesn't make it easier. I'm sure that IBM has had prototypes of dual cores on one die before. They wanted the 7000 series (G4) of the PowerPC chips to have a high-end model with 4 processors on one die. It is just hard to do. Just like it is hard to write an operating system that will make non-SMP programs utilize SMP. Windows 2000 has "load-balancing", where it will run processor-intensive processes on the chip that isn't running the OS.
Overclock? (Score:3)
Second of all, good luck coming up with the cash to buy one. Even if where you worked got one, they would still keep it under lock and key, tighter than Fort Knox (for all you non-US people, Fort Knox is a place owned by the Treasury Department where lots of precious metals are stored. It is locked up pretty tight.). I'm a superuser for my network at work, and I'm not even allowed near some of the boxes we have.
Re:Already here with current chips? (Score:3)
This would be different because two threads would be executing simultaneously, so as long as the OS could find two threads that need cpu-time, the hardware would gain a lot of parallelism without having to do more scheduling.
This approach is good because it offers a way to use the excess die space without requiring too much extra effort from the designers. In the last decade or two the # of transistors per chip has gone up several orders of magnitude, while the # of man-years per chip-designer has not come close to keeping pace. It's also nice because the other common approaches are obviously reaching the point of diminishing return.
What Compaq is doing is more interesting, though... they are processing multiple threads simultaneously... on the same set of execution units! If one thread doesn't have enough parallelism, that's OK -- the other 7 can pick up the slack!
Better article on Power4 (Score:3)
The article says the system will have 10 GBytes/second of memory bandwidth and a 45 GBytes/second multiprocessor interface. The article estimates the cache sizes as 1.5 MB for the shared on-chip L2 and 32 MB for the off-chip L3 cache. Each processor die has 5,500 pins and attaches directly to a multi-chip module (MCM).
The article also suggests that the system will support up to 32 processors (2 per die x 16), and even more processors using clustering technology.
Looks like this is going to make for a fast server system.
Re:Superscalar vs. on-die SMP (Score:3)
Because of limited instruction-level parallelism. Even with a 512-entry reorder window, 256 renaming registers, and a 256-way superscalar architecture, you still won't have ILP beyond about 10 on the gcc component of the SPEC benchmarks. Furthermore, as you increase the width of a machine, you increase the difficulty of finding all the data dependencies quadratically, since each instruction must be compared with each other instruction. Ultimately it comes down to an issue of diminishing returns, and you find that it is cheaper and faster to run two threads at once than it is to allocate twice as many resources to a single thread.
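Here's the quadratic part in miniature (a sketch; the three-register instruction encoding is made up): every instruction in the issue window gets checked against every earlier one, so the comparator count grows as the square of the width.

    package main

    import "fmt"

    // instr is a toy encoding: one destination register, two sources.
    type instr struct{ dest, src1, src2 int }

    // countRAW compares every instruction against every earlier one in the
    // window, looking for read-after-write dependencies: n*(n-1)/2 checks.
    func countRAW(window []instr) (comparisons, deps int) {
        for i := 1; i < len(window); i++ {
            for j := 0; j < i; j++ {
                comparisons++
                if window[i].src1 == window[j].dest || window[i].src2 == window[j].dest {
                    deps++
                }
            }
        }
        return
    }

    func main() {
        w := []instr{{1, 2, 3}, {4, 1, 5}, {6, 4, 1}, {7, 8, 9}}
        c, d := countRAW(w)
        fmt.Printf("%d instructions: %d comparisons, %d dependencies\n",
            len(w), c, d)
        // Doubling the window width roughly quadruples the comparisons.
    }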
As for the question of caching, I'd assume that they share the L2 cache the same way as in any other such system -- they share the bus, write to and read from the same cache, and snoop each other's actions. They of course would have their own internal L1 caches, with lower latency.
Power arch at 500 MHz! (Score:4)
In the past, complications with multiprocessor computers have prevented their supremacy over single-CPU architectures. I'd love to see IBM succeed with their multi-CPU chips, as I believe this technology may solve the nagging parallel problems with processor interconnect. And the Power architecture is very nice.
Does anyone know if the PowerPC and Power architectures will finally become one with this product, as was expected with previous Power revisions? Somehow, I really don't expect to see it ever happen, with the way Motorola and IBM have gotten along.
overclocking (Score:4)
Enough with overclocking already. This isn't your $70 Celeron toy. When you get to work with $5,000+ chips, you are free to overclock them, but I doubt it ever even occurred to anyone to overclock their $9,000 UltraSPARC CPU or similar. Yep, overclocking is stupid. Flame on.
Re:Starting at 1.1GHz? (Score:4)
Power 4
2+ gigahertz
Dual processors on one die
500 MHz bus
LARGE L2 cache (I would imagine 2-4 MB)
64 bit
-------------------------------
x86 CPUs
1+ gigahertz
One processor per die
200 MHz bus (I don't recall the bus of the Willamette)
512 kB-2 MB L2 cache
32 bit
This is not something you will see on Tom's Hardware. Clock speed isn't everything: a 500 MHz 21264 DEC Alpha is MUCH faster than a 500 MHz PIII. The Power4 is not a desktop processor. Compaq will not ship computers with the Power4 processor in them. People need to understand this! When was the last time you saw a benchmark that was PIII vs. RS/6000? I have only seen it once, and that was the PIII Xeon compared to other server hardware, namely from Sun and DEC. That was on Intel's site.
interesting details (Score:4)
This article doesn't mention the most interesting detail I heard about the Power4: They're supposed to come in small rings of about four chips connected by ultra-high frequency 128 bit uni-directional buses that allow multiple chips to share their L2 caches, with fairly intelligent coherency stuff handled in hardware.
The only bad part is that they're really targeting the high-end server market, whereas I want most of that stuff for the low end too. It's supposed to be 400 mm^2 on a .18 micron process w/ copper, so even after it moves to .13 micron it'll still be too expensive for mainstream use.
Other tidbits include: 1. It's dropping a few of the more complex instructions from its instruction set and depending on the OS to emulate them; 2. To simplify instruction scheduling, they're keeping track of packets of instructions instead of individual instructions; and 3. The per-chip L2 size is supposed to be 1.5 megabytes.
Explanation - Re:What took you all so long ? (Score:5)
> SMP on a single chip is an obvious advance.
Unfortunately if you multiply the amount of circuitry you are trying to deliver in one fully working device, you cut your yield exponentially. This is a SERIOUS problem if your yields aren't high enough to make the exponential nature a small effect.
Say on one wafer you have 30 defects bad enough to wreck whatever chip they land on. Now normally you make 100 chips on that wafer. So (first approximation here; I won't actually do the statistics) 70 chips make it, and your yield is 70 percent.
But now you double the size of your chips, so that same wafer now only produces 50. But you still have those same 30 bad defects. Whoops, your yield is now 40 percent. Quadruple the size of your die... Whoops, now you will be lucky to get a handful from that entire wafer (you're trying to get 25 chips when there are 30 randomly distributed defects... I leave the answer as an exercise for the reader :)
On the other hand, if you do the same rough approximation with only 10 super-bad defects per wafer, then you go from a 90 percent yield to an 80 percent yield when doubling the die size. Nowhere near as bad an effect on the economics.
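If you do want the statistics, the standard first-order model for randomly scattered defects is Poisson: yield = exp(-defects per die). A quick sketch, with the numbers chosen to match the rough examples above:

    package main

    import (
        "fmt"
        "math"
    )

    // yield applies the first-order Poisson model: the chance a die catches
    // zero of the randomly scattered defects is exp(-defects per die).
    func yield(defectsPerWafer, diesPerWafer float64) float64 {
        return math.Exp(-defectsPerWafer / diesPerWafer)
    }

    func main() {
        fmt.Printf("30 defects, 100 dies: %2.0f%%\n", yield(30, 100)*100) // ~74%
        fmt.Printf("30 defects,  50 dies: %2.0f%%\n", yield(30, 50)*100)  // ~55%
        fmt.Printf("30 defects,  25 dies: %2.0f%%\n", yield(30, 25)*100)  // ~30%
        fmt.Printf("10 defects, 100 dies: %2.0f%%\n", yield(10, 100)*100) // ~90%
        fmt.Printf("10 defects,  50 dies: %2.0f%%\n", yield(10, 50)*100)  // ~82%
    }

Slightly gentler than the straight subtraction, since some defects land on already-dead dies, but the exponential blowup with die area is the same story.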
So, the only reason they are now considering it is that they expect to have defect rates reduced enough to make it reasonably economical.
-NH
My apologies for avoiding the statistics and actual mathematics; my examples above use randomly chosen yields. I have an optoelectronics background that is a few years old, from back when production yields at some places for III-V QWH lasers with simple integration with a few other devices were utterly pathetic... like 10 percent!!