G4 vs. Athlon Review 176
heatseeka writes "There is a great article at Ars Technica comparing the Motorola G4 and the AMD Athlon. They discuss every detail of the design of the CPUs, and give credit where credit is due. " Hannibal does a great job dissecting the different chips, as well as explaining the background behind each chip.
Faster or not... (Score:1)
Great article (Score:1)
=======
There was never a genius without a tincture of madness.
Re:Faster or not... (Score:1)
Why PowerPC will win out... (Score:2)
IBM and Motorola are working on putting multiple PowerPC cores onto a single die. IBM has already done this with its Power CPUs (a sibling to the PowerPC). This is feasible with the PowerPC because its power consumption is so low. A G3 at 400 MHz (on a .22 micron process), for example, uses 8 W max, 5 W on average. A single PIII or Athlon uses at least 4-5 times that much on average, due in large part to the complex instruction set that must be decoded and executed. With IBM's and Moto's superior copper interconnect and SOI technologies, the power consumption and core size can be reduced further, allowing even more cores on each die. Modern multitasking, multiprocessing OSes with well-written multithreaded apps will scream on these multiple-core CPUs.
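To make the "well written multithreaded apps" point concrete, here is a rough sketch (purely illustrative, not from the post) of the kind of code that scales onto extra cores: two POSIX threads each sum half of an array, so a dual-core part can genuinely run them side by side.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static double data[N];

    struct half { int start, end; double sum; };

    static void *sum_half(void *arg)
    {
        struct half *h = arg;
        for (int i = h->start; i < h->end; i++)
            h->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        struct half lo = { 0, N / 2, 0.0 };
        struct half hi = { N / 2, N, 0.0 };
        pthread_t t1, t2;

        pthread_create(&t1, NULL, sum_half, &lo);  /* can run on core 0 */
        pthread_create(&t2, NULL, sum_half, &hi);  /* can run on core 1 */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("total = %.0f\n", lo.sum + hi.sum);
        return 0;
    }

On a single core the two threads merely time-slice; on a multi-core die they can execute in parallel with no source changes.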
AMD Disadvantage (Score:2)
Not much of a comparison (Score:1)
Re:Faster or not... (Score:1)
why you musn't read /. at 3am (Score:1)
"There is a great article at Art Rechnica comparing the Motorola G4 and the AMD Athlon. They discuss every detail of the design of the CPU's, and give credit where credit is due. " Hannibal does a great job dissecting the different chips, as well as explaining the background behind each chip.
I was kind of amused at the thought of an art magazine reviewing the design of a couple of chips and the backgrounds behind them. Ugh.
Performance in practice (Score:3)
While such a test would be interesting, I expect that the results would, in practice, be as much a test of compiler maturity as a test of the speed of the underlying system. Despite the best efforts of the processor designers (out-of-order execution and all), these sorts of processors tend to be very sensitive to the compiler technology. Furthermore, many of the multimedia and vector processing performance enhancements (SIMD vs. AltiVec) really need to be accessed from assembler at the moment.
Still, rather interesting stuff.
No real information here - way too simplified (Score:1)
They should just leave it to MPF, which is about the only open source that does publish interesting technical information on CPU architecture.
7400 the Sport Car, K7 the Monster Truck (Score:2)
Let me preface all this with: I know very little about how these things work exactly, so bear with me.
It would seem from the article that although the K7 and the 7400 are pretty comparable at the moment, the 7400 has much more room to grow, as well as being a much more efficient chip.
To me, the fact that the K7 has to decode all this x86 legacy stuff suggests that the K7 is basically a brute little monster truck that rampages over its flaws by packing a lot of punch, in this case by piling on more and more transistors.
The 7400 seems to be a sleeker, more elegant solution to the whole thing. It's the sleek sports car with speed and elegant, efficient power vs. the brute force of the K7. So I guess in that regard, the 7400 wins out on efficiency and future sustained growth potential...
Must be said though that a mate of mine owns an Athlon and it rocks the house down, so even if it's like a brute little monster truck instead of the sleek sports car of the G4, it still packs a pretty hefty punch. I guess they both kinda rock the house down...
Re:Performance in practice (Score:2)
--
BluetoothCentral.com [bluetoothcentral.com]
A site for everything Bluetooth. Coming in January 2000.
Re:AMD Disadvantage (Score:3)
Both are awesome chips-the difference is degrees (Score:3)
The G4, though, has the advantage of being a lighter-weight chip (fewer transistors needed, fewer instructions, less microcode). As for speed, RISC versus CISC aside, the Motorola/IBM designs have not shown the ability to hit the high clock speeds that Intel and AMD are playing with. Until about a year ago, the two were neck-and-neck, but the x86 chips are now up around 800 MHz while the G4 is just passing 500. Given the efficiency of not having to deal with all the microcoded x86 instructions, though, the G4 minimizes the difference under a well-implemented OS.
Another thing to keep in mind (mentioned in the article) is that the G4 is not strictly designed for desktop computers. PowerPC chips are very popular in the embedded market, where they go into single-board computers, automobiles, and all sorts of dedicated hardware. Sales to Apple alone wouldn't keep a chip family alive. Interestingly, Intel sells a lot of older 386 processors to the embedded market too; the too-cool BlackBerry two-way pagers, among other devices, use a 386 processor.
The best thing the PowerPC has going for it, IMHO, is that Motorola didn't build in backwards compatibility with the M68K-series processors. They made a clean architectural break, and the few companies that needed compatibility got it through emulation (parts of the MacOS are still in 68K code today). The ample shortcomings of the MacOS tend to cover up what is a first-rate processor family.
My suspicion as to the 'real' reason Intel has been funding Linux ventures is this: they know that Windows is hopelessly tied to X86, and they are hoping to eventually leave that baggage behind in the IA-64 architecture. Ultimately, X86 will be a drag on clock speeds.
Sorry to have rambled about here some, but I'm still a bit sleep-deprived from the weekend.
- -Josh Turiel
Re:Performance in practice-Corrected (Score:1)
I agree such a test would be interesting. Just a quick correction, though: AltiVec is also SIMD, the Flynn classification that is used as a generic term in the literature (Flynn's seminal paper is where the terms SIMD, MIMD, etc. were first coined). AltiVec, SSE, MMX, 3DNow!, MIPS-V, VIS, etc. are all SIMD implementations.
Also, Motorola has a bunch of very nice C libraries and modified compilers which take advantage of the new instructions, and I'm sure some other SIMD extensions have similar C libraries; so it's not exclusively accessible from assembly language. What's probably needed is some very smart compilers to choose when to use the SIMD extensions.
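For a sense of what that C-level access looks like, here is a minimal sketch (mine, not from the article, assuming a compiler with AltiVec support and the altivec.h intrinsics described in Motorola's documentation):

    #include <altivec.h>

    /* dst[i] = a[i]*b[i] + c[i] for four floats at once; the pointers are
     * assumed to be 16-byte aligned. */
    void vmadd4(float *dst, const float *a, const float *b, const float *c)
    {
        vector float va = vec_ld(0, a);          /* load 16 aligned bytes  */
        vector float vb = vec_ld(0, b);
        vector float vc = vec_ld(0, c);
        vector float vr = vec_madd(va, vb, vc);  /* fused multiply-add     */
        vec_st(vr, 0, dst);                      /* store the four results */
    }

The compiler should map vec_ld, vec_madd, and vec_st onto single AltiVec instructions, so you get the SIMD speedup without dropping to assembler.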
--
BluetoothCentral.com [bluetoothcentral.com]
A site for everything Bluetooth. Coming in January 2000.
Re:No real information here - way too simplified (Score:2)
Re:AMD Disadvantage (Score:1)
mot has C examples for altivec access (Score:1)
http://www.motorola.com/SPS/PowerPC/AltiVec/
and click on "programming examples"
cheers
Wrong... (Score:4)
Article doesn't discern much (Score:3)
The US Govt is often criticized for implementing obscenely expensive solutions to problems when simple ones would have done the job better. The same can be applied to the K7 vs. G4 question, for it is always better to have efficiency when the performance is the same.
Rumor has it that Intel is currently running the new Itanium chips at 30 watts of power consumption, over twice that of the G4. If I upgraded my motherboard to the Itanium(tm), I would have to get a new case or power supply because of the chip's incredible inefficiency. That is not novel engineering but sluggish engineering, something which is not prized in this day and age.
Re:Great article (Score:1)
more alpha undertones (Score:1)
Doesn't matter (Score:1)
Re: MHz differences will fade soon enough... (Score:3)
True, but the gap will lessen (or disappear) in the near future. The G4 has been limited in clock speed by its exceptionally short four-stage pipeline. Motorola has demonstrated a version of the G4 with a longer seven-stage pipeline that hits much higher clock speeds (~700 MHz at the demo, higher in production). Each stage is simpler and faster, resulting in the higher clock speed. The K7 already has a very deep pipeline, which is a large factor in its high clock speed.
My Favourite bar graph (Score:4)
It shows power consumption of the major chips in use. Note where the PPC chips are!
Enjoy.
Pope
Clock speeds (Score:2)
The PowerPC doesn't need the high clock speeds of the Intel/AMD chips. On average, it does about twice as much per clock cycle as the x86 chips do.
Comparing clock speeds without consideration of clock efficiency is like comparing the version numbers of the various Linux distributions.
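Back-of-the-envelope illustration of that point (the per-clock figures here are made up purely for the arithmetic, not measurements):

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical figures, just to show why clock * work-per-clock
         * matters more than clock alone. */
        double g4_mhz  = 500.0, g4_ipc  = 2.0;
        double x86_mhz = 800.0, x86_ipc = 1.0;

        printf("G4-ish : %.0f million instructions/s\n", g4_mhz * g4_ipc);
        printf("x86-ish: %.0f million instructions/s\n", x86_mhz * x86_ipc);
        return 0;
    }

With those assumed numbers, the lower-clocked chip comes out ahead; the point is only that the comparison needs both factors.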
Re:more alpha undertones (Score:1)
Re:AMD Disadvantage (Score:1)
--
The PowerPC = all that and a bag of chips (Score:1)
Granted, we don't make personal computers, but IBM has marketed those processors toward the Linux world.
Re:Both are awesome chips-the difference is degree (Score:2)
I think this is probably not so much an issue of microprocessor design as it is of fabrication capabilities.
Also, clock speed is not a valid comparison between two different chips, especially two vastly different chips; what you get per MHz is not the same, otherwise you would be able to buy an 800 MHz Z80. It's like the truck company that built a two-stroke pickup: people got freaked out by the fact that it redlined at 4000 RPM.
Re:AMD Disadvantage (Score:1)
"The only reason apple was able to do it is because they didn't have very many macs in use"
There are millions of 68K-based Macs. More to the point, at the time they dumped the 68K instruction set, 100% of existing Macs were based on the 68K line of chips.
"their systems are so proprietery [sic] that whatever they say goes."
For what it's worth, this is true. Unfortunately, you go on to say:
"On the other hand, there is heavy competition in the PC world and there are too many PC's that exist to just suddenly drop support for them."
First of all, dumping the x86 instruction set from new chips will not 'drop support' from the existing PCs. The existing PCs will remain unchanged. What dropping the x86 set would do is remove backwards compatibility in the new chip. This would be a bold move, and has yet to be attempted, in spite of the 'heavy competition in the PC world.'
What is the paradox here? Perhaps 'heavy competition' does not automatically equal 'elegant new technology.' Or, perhaps there is not as much 'heavy competition' as PC-owners would like to imagine. They don't call it Wintel for nothing. Also, few people give Apple credit for heavily competing with ALL the PC companies, all the time, all by themselves.
Which brings us to Linux. Divorced from Microsoft and Intel, what are now ironically called PCs may finally see all kinds of funky new hardware and software technology. Now that IBM is selling PowerPC motherboards, we may soon see G4 and K7 boxes side by side in the computer store--both running Linux. Now that's innovation and freedom of choice!
Mike van Lammeren
Condition code registers and branch prediction (Score:3)
Unlike all the other similar features mentioned in the article, these can not be retrofitted into the K7, because it is limited to the x86 instruction set, which does not have this concept.
Basically, any instruction which needs to check the result of an operation (such as a compare, or overflow from an arithmetic operation) has to use condition codes. But in a pipelined processor, the result of the operation usually has to wait until the instruction has finished going through the pipeline. Rather than wait this long to decide what to prefetch, branch prediction tries to guess whether or not the branch will be taken. The predictions are usually right, but not always. What if there is more than one such comparison close together, particularly if the result is not being used directly for a branch, but for a boolean expression?
What the PPC does is have multiple (eight) condition code registers. When an operation such as a compare is done, you select a condition code register to receive that result. In the same way that code can be optimized for RISC by interleaving multiple threads of operations so that the result of an operation isn't used until three or four instructions later, the condition code register usage can also be interleaved.
With out-of-order execution (OOO), the CPU automatically rearranges instructions to achieve this interleaved usage of registers. And thus the PPC gains this advantage with condition code register usage as well.
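A tiny, purely hypothetical C fragment to illustrate the idea (the cr0/cr1 assignments in the comments are what a PPC compiler could choose to do, not something spelled out in the source):

    /* Two independent comparisons feeding one boolean expression.  On PPC a
     * compiler can aim each compare at a different condition-register field
     * (say cr0 and cr1), so the compares need not serialize on a single
     * flags register the way they would on x86, where both fight over EFLAGS. */
    int in_range(int x, int lo, int hi)
    {
        int a = (x >= lo);   /* compare #1 -> could target cr0 */
        int b = (x <= hi);   /* compare #2 -> could target cr1, independently */
        return a && b;       /* combine the two results */
    }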
Re:AMD Disadvantage (Score:1)
It's true Apple's systems are proprietary, but so what? There is no reason that AMD could not do the same thing. Suppose that AMD and Microsoft co-developed a new ISA, then developed a software emulator for the old x86 architecture, and Microsoft began writing Windows to run on both architectures. I see no reason that this could not happen, except that Microsoft may not have the ability or desire to do it. (remember NT on Alpha?) But that's a failing on the part of Microsoft, not a limitation of non-proprietary hardware.
In fact, if Microsoft had stuck to its original plan to develop NT for multiple platforms, then this could have been done quite easily once they started selling NT as a consumer product. I don't see what proprietary hardware has to do with it. Apple just has more guts than the Wintel world does.
Re:No real information here - way too simplified (Score:1)
Re:Both are awesome chips-the difference is degree (Score:2)
Re:more alpha undertones (Score:1)
The 21264, therefore, can be safely called a "demoniac."
Future Alphas will start to make use of Simultaneous MultiThreading to run threads at the CPU level. I don't know what form this will take, initially.
--
Re: MHz differences will fade soon enough... (Score:2)
Re:AMD Disadvantage (Score:1)
"There are millions of 68K-based Macs. More to the point, at the time they dumped the 68K instruction set, 100% of existing Macs were based on the 68K line of chips."
And how many x86 PC's are there?
"First of all, dumping the x86 instruction set from new chips will not 'drop [software] support' from the existing PCs."
But will new programs run on the old systems? Will old programs run on new systems? Will there be linux software for it, such as a compiler, or a kernel? That's what I meant in 'support', or rather software support.
"This would be a bold move, and has yet to be attempted, in spite of the 'heavy competition in the PC world.'"
The idea is that there are so many PC systems out there that people will not just change to this new architecture, because of the existing applications that run on these systems. The 'competition' would just take over the market. Look at Intel: the reason they were so successful in the beginning is that they were associated with compatibility. The reason that PCs (as in x86 architectures) were successful is that they were more open and led to more competition. Hmmm... maybe this so-called 'competition' actually acts as an incentive not to drop 'support' for the existing technologies (the x86 architecture), as in not making chips that 'support' the old instruction set. There would be no software to run on them.
"What is the paradox here? Perhaps 'heavy competition' does not automatically equal 'elegant new technology.' Or, perhaps there is not as much 'heavy competition' as PC-owners would like to imagine." How many companies make G4 processors? How many companies make motherboards for them? There appears to be only one company. Naturally there must be competition if there's only one company...
"few people give Apple credit for heavily competing with ALL the PC companies, all the time, all by themselves."
The only reason they are competing is because they have pretty computers. I really don't see how they got as far as they have, as they sell over-priced, under-powered machines. How much is the high-end G4? How much are Dell's even higher-end workstations? There doesn't appear to be much innovation, or any competition to drive innovation, there...
"Which brings us to Linux. Divorced from Microsoft and Intel, what are now ironically called PCs may finally see all kinds of funky new hardware and software technology. Now that IBM is selling PowerPC motherboards, we may soon see G4 and K7 boxes side by side in the computer store--both running Linux. Now that's innovation and freedom of choice! "
That would definitely be cool!
Re:AMD Disadvantage (Score:1)
This is the point of the original post: the support could be provided in software, saving space on the silicon and allowing native apps to run faster and cooler. Many apps don't need the kind of horsepower that modern chips have, and the few that do can be released as "fat" apps, as was done on the Mac. The whole point is that there *were* apps to run on the Power Macs; in fact almost every 68K app ran on the new Macs, just not as fast as it otherwise would have.
The only reason they are competing is because they have pretty computers. I really don't see how they got as far as they have, as they sell over-priced, under-powered machines.
Yes, they make "pretty computers," and yes, it is kind of silly. But that is far from the only reason. Linux/Solaris/*BSD is simply not an option for most users. And although Windoze is improving, it is still uglier, more bloated, less intuitive, and less integrated with the hardware than the MacOS. Style matters. Ease-of-use matters. Reliability matters. And MacOS is still the best choice in all three areas.
You're probably a Linux geek, and that's fine. But let me point out that the Mac is not designed for you. The fact that you are not impressed with it does not make it an inferior product. This engineer's myopia of judging a product based solely on raw performance and price is not helpful. There's more to a computer than its MHz rating and its hard drive size.
Misleading graph (Score:2)
This would be a good graph if your main concern is raw power consumption of a normal processor purchase (I'm sure that you could get cacheless Athlons if you buy enough).
Re:Faster or not... (Score:1)
Re:AMD Disadvantage (Score:1)
The only reason apple was able to do it is because they didn't have very many macs in use...
OK, that's just stupid.
This is one of Apple's biggest advantages when it comes to industry-leading innovation, although it is a bit annoying when you're thinking of making a purchase. Fortunately they're dumping most of their proprietary (or just different) hardware in favor of more commonly accepted standards (HD-15 instead of DB-15 monitor connectors, USB instead of ADB, no more 8-pin mini-DIN serial ports...).
On the other hand, there is heavy competition in the PC world and there are too many PC's that exist to just suddenly drop support for them.
Once again, you're on crack.
Chances are that any company (AMD, Intel) that just dropped the x86 instruction set would lose a lot of business, and nobody would support it. They will always have to maintain compatibility with previous generations and just add new instructions to make it more powerful.
This is the view that is currently held, as is evidenced by what products are out there, but with the rise of Linux (and, to a lesser extent, other open-source operating systems), processor-dependence is becoming less important, and chipmakers know this. And remember, AMD has also made a bold move, and been successful - they released a processor with no motherboard support whatsoever, and lo and behold, we have Athlon motherboards now. If AMD says they're making a new processor that uses a new instruction set, you can bet that Linux will support it before too long.
I think that it would be great to have a new RISC based chip for the PC (it really wouldn't be a PC then), but it just isn't practical.
Um, PC means Personal Computer. If you change nothing but the processor, you also have to change the software (new Linux distribution, recompile, woohoo, not that hard, just like all the other architectures Linux runs on [PowerPC, Alpha, Sparc...]).
Even Faster (Score:1)
Basically, that would allow one to run legacy apps by allowing the Athlon to operate as a smokin' fast x86... and run new apps by allowing the Athlon to operate as a smokin' RISC machine.
And if this could be done in a multitasked environment, so much the better -- running a legacy app *and* a new app simultaneously.
Too much to hope for, I'm sure!
Re:Faster or not... (Score:2)
Now, I am too tired from the weekend (partying Friday night and working early Saturday morning on Y2K testing) to chase the links and give you the exact comparisons between the fastest of each chip, or MHz comparisons, but someone has already done the testing, and the results can easily be found with a web search.
--cheers & happy new year!
Dan
People actually care about power consumption? (Score:1)
Why are people trumpeting that the G4 only uses 4 or 5 watts, or whatever? Who cares? The Alpha CPUs, widely regarded as the fastest money can buy, use something like 100 watts, but I'm sure the people that buy them really don't care about that either.
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
Re:People actually care about power consumption? (Score:2)
Power consumption is *IMPORTANT* if you're making laptops or embedded systems, correct?
Pope
Re:People actually care about power consumption? (Score:1)
Re:Faster or not... (Score:1)
Re:My Favourite bar graph (Score:1)
John
Re:Article doesn't discern much (Score:1)
Re:AMD Disadvantage (Score:1)
you ARE joking, aren't you? (Score:1)
You are a psychopath (Score:1)
PS: I don't like the G4's either, but you seem to have had just a little too much to drink, or not enough to think that.
Re:Mac OS more Stable than NT... LOL (Score:1)
7.6 and earlier was buggy as hell, and if you're running wacky programs Windows might handle it better. But I'm currently running Mac OS 9, and it only crashes about every two weeks. I think it depends a lot on system configuration. A well configured system on any platform will be more stable than a poorly configured one. But OS 9 is extremely stable if you don't abuse it much.
One advantage that increases Mac OS stability is the smaller number of machine types and the more tightly integrated hardware and software. It might be that once you install all the right pieces Windows is as reliable, but getting it there is a pain, and Windows isn't very helpful. Apple has worked hard to ensure that every version of their OS supports every recent machine. Their latest (OS 9) supports any machine with a PPC, which takes you back six years.
Anyhow, system stability has improved dramatically since System 7, and at this point system configuration is more important than OS features. And it looks like Mac OS X will be out long before Microsoft's consumer NT arrives, so that'll widen Apple's lead in this area.
BTW... (Score:1)
Do some reading before you spout off nonsense (Score:1)
In a word, yes. He knows an awful lot. If you don't know what "Post-RISC" is talking about, why don't you read his article on RISC vs CISC like he suggested. Here, I'll make it easy for you: RISC vs. CISC: the Post-RISC Era [arstechnica.com]
Of course, it also looks like you didn't even bother to fully read this article. Hannibal is hardly anti-mac, anti G4, or any of that. He concludes that he prefers the G4 over the Athlon (and the Alpha over both of them). Give me a break...
--------
Re:No real information here - way too simplified (Score:1)
Re:AMD Disadvantage (Score:1)
Well, ok, in most areas Linux beats Mac OS in the reliability department. But both beat Windows, and Linux is so far behind in the other two categories that it's not even an option for most users.
Where I work I admin a MacOS X Server machine that provides AppleShare (and Samba) file services. There is a web interface that lets you control the AppleShare file server remotely. One of the things it does is give you a table view of all the current connections and how long they've been connected. Below is a list of the current active connections' connect times. What this means, of course, is that each of these client Macs has been running at least as long as it's been connected to the server. I was rather surprised at some of these connect times:
Total Connected Users: 18
Connected For...(day hr:min)
42 06:22
31 08:47
31 08:42
28 07:27
27 08:46
24 00:42
21 06:59
19 04:39
18 05:26
17 05:45
17 03:08
14 03:23
13 06:41
13 02:37
12 02:14
7 08:45
6 08:20
2 05:27
Re: MHz differences will fade soon enough... (Score:1)
Re:My Favourite bar graph (Score:1)
So, remind me why I'm supposed to care about CPU power consumption?
Re:Article doesn't discern much (Score:1)
Can we say "Kryotech"? I think that's cool enough.
And how much additional power does the Kryotech cooling unit use?
Re:silliness (Score:1)
Re:My Favourite bar graph (Score:2)
Wow...the PowerPC chips use a lot less power than the Intel chips. No wonder Mac laptop battery life is so much...oh, wait...Mac laptop battery life is the same as comparable PC laptop battery life.
Do they have the same battery life? According to Apple's web site, the iBook runs up to 6 hours on a single battery, the PowerBook runs up to 5 on a single battery, and you can put 2 batteries in it (one would go in the spot normally used for the CDROM/DVD drive.) I went to Dell's web site and for two of the three laptops available, the times were 2.5 hours/battery and the third was 3.5-4 hours/battery. I didn't take a lot of time to look, but I assume that you can put two batteries in if you forgo the CDROM/DVD drive. So, you were saying?
Re:People actually care about power consumption? (Score:1)
x86 Athlon "Frontend" (Score:1)
And, is this the type of idea that the Crusoe processor is going to have - i.e. modulating the frontend on a chip so it can run many architectures?
The mind ponders.
Re:Even Faster (Score:2)
Basically, that would allow one to run legacy apps by allowing the Athlon to operate as a smokin' fast x86... and run new apps by allowing the Athlon to operate as a smokin' RISC machine.
Why does everyone assume that x86 is somehow bad? Has the Apple "RISC is god" propaganda gotten to you? This issue has been discussed on Slashdot before. x86 is fine. Don't fix what isn't broken.
Besides, the Athlon goes 700 MHz. If that's not fast enough, I suggest buying a Cray.
---
Heat Dissapation? (Score:2)
Re:Both are awesome chips-the difference is degree (Score:1)
Re:My Favourite bar graph (Score:1)
Secondly, Intel boxen use stripped-down "mobile" CPUs. Last I heard, a "mobile" P2 is about a quarter the speed of a similarly clocked "normal" P2.
Apple, meanwhile, gets to use exactly the same chips in both desktops and laptops. They consume less power not only through lower direct dissipation, but also by not needing a fan to deal with heat, and they turn out to be dramatically faster.
Apple has still never shipped a machine that needs a cpu fan. A heat sink is common, but they're often as small as a quarter on current laptops.
Re:Article doesn't discern much (Score:1)
Re:AMD Disadvantage (Score:1)
With Alpha/NT Digital bore the costs of creating and maintaining the Alpha versions of the OS, tools and applications.
What if AMD did the same thing? Imagine a version of Win2K compiled to native K7 code. AMD could re-use the FX!32 stuff that Compaq had put into the kernel. Then they could create a K7-lite that didn't have the x86 baggage.
That would be the el-cheapo version. Then you've got the normal K7 that can still run x86 apps.
It would be an interesting experiment and probably scare the crap out of Intel.
Of course, this could also be a crack-pipe fantasy.
P.S. NT on Alpha died when Compaq decided they were pulling the plug. Since Microsoft doesn't sell any hardware, why should they support an architecture that the manufacturer doesn't support anymore?
Re:Clock speeds (Score:2)
Comparing clock speeds without consideration of clock efficiency is like comparing the version numbers of the various Linux distributions.
Plus, a lower clock speed presents far fewer design issues than a higher one. Lower clock speed chips can use cheaper fab processes, and the motherboard can be laid out with less concern for cross-coupling/transmission-line problems, clock skew (very important), and a host of other problems inherent in high-speed/microwave design.
No, 7400 ~= VTEC engine, K7 ~= 454 Chevy (Score:1)
MOTO 7400 in ATX form-factor Motherboard? (Score:1)
Ahh, there's the rub. Or should we say ``the fly in the ointment''? If a screw-driver shop can't build it, then my interest in the G4 remains but academic. I really don't have a use for ``boutique'' computers.
Athlons still don't work with stable Debian (Score:1)
Re:Another Slashdot Genius (Score:2)
The point is that there is someone out there who can do better. Not necessarily the person posting.
Re:Athlons still don't work with stable Debian (Score:1)
Debian 2.2 (potato) if you insist on Debian
Re:Even Faster (Score:2)
Imagine if AMD could take all the effort it spent on making that aging x86 instruction set work with its spiffy new processor and put it into making the processor fast instead. Rather than a 700 MHz x86 processor, you'd probably have a 1 GHz or higher RISC processor that would make the current K7 and G4 look like snails.
Only if we're talking about laptops (Score:1)
Re:AMD Disadvantage (Score:1)
OS support is the biggest reason the Apple-style emulator approach wouldn't work in the PC world. DOS still matters, as do OS/2, NetWare, BeOS, and old versions of Windows. Since it's unlikely that an emulator would ever be added to these OSes, you'd be cutting yourself off from the 'legacy' market and would essentially end up like NT/Alpha.
--
the problems with "post-RISC" processors (Score:1)
In true RISC processors (MIPS and SPARC are good examples) the compiler can and does produce an efficient pipeline without stalls of any sort, which means that OOO is wasted on such a chip.
The problem comes in with inefficient compilers and poor resource management. To solve this, hardware developers introduced things like OOO and branch prediction to let bad compilation algorithms retain their top speed. But this leads to a bad trend, which is quite apparent in the x86 compiler community: letting the hardware schedule instructions in the end does not encourage compiler developers to make better and better compilers, and instead allows them to "scoot by" without ever learning the art of mastering the pipeline.
There is a new architecture arriving shortly, called VLIW, which is at its heart nothing but taking the RISC concept one step further, forcing the compiler to take its motley crew of commands and set them up to pipeline efficiently. There are more differences than this, I know, but at its heart this is pretty much what it is. A good compiler will produce better code than any OOO engine out there can ever manage. Why? Because the programmer knows what he wants; the processor does not. No matter how smart the processor, no matter how many transistors you throw at the problem, a smart programmer with a smart compiler will always turn it on its head.
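As a small, hand-written illustration of the kind of scheduling a good compiler (or careful programmer) does, and that VLIW pushes entirely onto the compiler, consider interleaving two independent accumulators so that no result is consumed by the very next operation:

    /* p0 and p1 are independent, and neither sum is used by the statement
     * immediately after the multiply that produced it, so the pipeline
     * (or a VLIW bundle) has room to overlap the work. */
    float dot2(const float *a, const float *b, int n)
    {
        float s0 = 0.0f, s1 = 0.0f;
        for (int i = 0; i + 1 < n; i += 2) {
            float p0 = a[i]     * b[i];      /* independent of p1       */
            float p1 = a[i + 1] * b[i + 1];  /* issues while p0 is busy */
            s0 += p0;
            s1 += p1;
        }
        if (n & 1)                           /* odd trailing element    */
            s0 += a[n - 1] * b[n - 1];
        return s0 + s1;
    }

This is purely illustrative; a real compiler would unroll further and schedule for the specific pipeline depth.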
Good article (Score:2)
Regarding some other comments:
Room to grow (and to go-go-go) (Score:1)
More to the point, the PowerPC post-RISC strategy is a lot smarter than Intel/AMD's. Mix'n'match logic to create the perfect balance of processor units, and slap four cores on a single slab of silicon. This is where PowerPC is going to be -at- next year.
The problem has been Motorola lagging behind the group. IBM has signed on to crank out the next rev of the G4, and clock speeds are due for a major boost: they demoed 1 GHz tech on the PowerPC platform two years ago, and are cooking up 9 GHz technology in deep R&D as we speak. Even with the MHz gap, the design of the G4 is smart enough to keep up with the "big brutes" crunching x86, and the POWER4 (IBM's 64-bit PowerPC implementation for its RS/6000 Unix and AS/400 "baby mainframe" systems) will leave IA-64 in the dust before it even gets out of the gate.
Smart money is on the original RISC R&D houses for the edge in the post-RISC sweepstakes: wait'll you get a load of what the Alpha has in store...
SoupIsGood Food
condition codes - yuck! :) (Score:1)
The thing that one should note is that, from a CPU design standpoint, condition code registers are often a Bad Thing(tm), as are instruction "side effects" in general (condition codes being one example), particularly in superscalar, out-of-order CPUs. They add complexity to the processor's attempts to re-order instructions on the fly. This is in part because the CC regs tend to create additional dependencies between instructions, and the CC register(s) often become a bottleneck. Also, when the CPU tries to reorder instructions, it has to worry not only about whether the reordering will clobber a needed result, but also about whether a moved instruction's side effects will do something unintended (i.e., the moved instruction changes the condition code, and so may not be movable without upsetting the code's semantics). These factors tend to constrain the CPU's out-of-order execution unnecessarily, leading to less-than-optimal performance.
Some processors avoid CC regs entirely, since they are a centralized resource that can easily become a bottleneck when multiple compares come close together. For example, MIPS doesn't use CC regs for integer code; if you want to check a condition, there is a compare instruction that sets the value of an integer register. Since said integer register can be any of the many integer regs, rather than one of a small set of particular registers, there generally aren't the bottlenecking problems you get with condition codes. Condition codes are really a CISC-ish thing anyway, and several RISC designs avoid them.
Adding additional CC registers helps somewhat with loosening that bottleneck, but is really a bit of a kludge around a nasty instruction set artifact in PPC. Not exactly something you'd want to "retrofit" onto an x86 CPU (which has more than its share of nasty instruction set issues already).
Native Athlon mode? A native Linux? (Score:1)
I'm not saying the x86 architecture is all that bad, but I bet a version of something like Linux optimized to run natively on the Athlon would outperform an x86 version.
Its x86 compatibility, when needed, is definitely a Good Thing(tm) at present, for emulating/running those lesser operating systems. Can your G4 run Windows like a 700 MHz Pentium?
You'd think that AMD could get a better hold of the 32-bit market by allowing their own technology to become a new standard. After a while, they could have chips (maybe marketed for servers at first) with no x86 emulation at all. It wouldn't be needed once their architecture established itself.
Maybe I'm dreaming, but I'm just looking for something better on a small budget... and backwards compatibility is nice every once in a while. Is this even possible with the current Athlon?
~J
Re: MHz differences will fade soon enough... (Score:1)
Eventually each company will own half the earth and they will fight wars with huge robots a la Mechwarrior. When that happens, sign me up for the cause.
Re:Mac OS more Stable than NT... LOL (Score:1)
But then, there is the configuration factor you mentioned at work here too. My Win98 machine is pretty solid, with not too many modifications from standard setup. The Macs I use (for some of my music classes at school) may or may not be set up all that well, as I have no idea of the competence, or lack thereof, of the Mac admins.
I personally wonder how much of the grief that gets blamed on Windows is due to Windows, and how much is due to poor-quality hardware. On machines with quality components, I've rarely had any sort of trouble with Windows. Things start getting shaky once you start with some of the really cheap hardware, though, in my experience. I do have to give M$ credit (*gasp*) for the sheer volume of hardware they support while still maintaining a reasonable level of stability.
Apple does have the advantage of a relatively small set of hardware to support, so I would hope they'd be able to get a decent OS onto it.
Re:x86 Athlon "Frontend" (Score:1)
Presumably so. The decoder might actually be simpler than the x86 one. Doing so would likely require some modifications to the back end as well (even though the CPU works in internal ops, it has to write results to memory only at the boundaries of whole x86, or in our theoretical version PPC, instructions, not these ops; after all, what does having executed half an instruction mean from the program's point of view?). Even then, the Athlon would still essentially be hindered by having to do a PPC-to-internal-op translation. Plus, the choice of what to put in your core depends on what sort of code you expect to be running. It could be that PPC code has different characteristics than x86, and so the core should be modified to be more efficient at running PPC... you can see how this could end up becoming more of an entire redesign rather than just "slap a different front end on it." But if you don't care about efficiency quite so much, the modifications to run PPC might not be all *that* terrible.
And, is this the type of idea that the Crusoe processor is going to have - i.e. modulating the frontend on a chip so it can run many architectures?
Essentially. The difference (AFAIK, as Transmeta hasn't officially said anything yet) with Crusoe is that it consists of a hardware part and a software part. The software part is responsible for decoding the instructions and telling the hardware what to do. The presumed benefit is that since the decoding is in software, it can be programmed to handle any instruction set pretty easily; and that this means you don't have to spend tons of transistors on complex decode logic like in the Athlon. So Crusoe should be less expensive to build.
However, software is normally slower than hardware. Transmeta's bet is probably that the savings in the decode logic will allow them to make the chip faster, offsetting the performance penalty of doing software-decode. Also, not having all that decode hardware should decrease the power consumption of Crusoe as compared with the Athlon and similar chips. Transmeta has hinted that Crusoe will be aimed at the mobile computing market, so this makes some sense.
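For flavor, here is a toy sketch of "decode in software" (my own guess at the shape of the idea; Transmeta hasn't published details, and real code-morphing would presumably translate and cache whole blocks rather than interpret byte by byte):

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_LOADI, OP_ADD, OP_PRINT, OP_HALT };

    int main(void)
    {
        /* Guest "program": r0 = 2; r1 = 3; r0 = r0 + r1; print r0; halt */
        uint8_t prog[] = { OP_LOADI, 0, 2,  OP_LOADI, 1, 3,
                           OP_ADD, 0, 1,    OP_PRINT, 0,   OP_HALT };
        int32_t reg[4] = { 0 };

        for (size_t pc = 0; ; ) {
            switch (prog[pc]) {                    /* the software decode step */
            case OP_LOADI: reg[prog[pc+1]] = prog[pc+2]; pc += 3; break;
            case OP_ADD:   reg[prog[pc+1]] += reg[prog[pc+2]]; pc += 3; break;
            case OP_PRINT: printf("%d\n", (int)reg[prog[pc+1]]); pc += 2; break;
            case OP_HALT:  return 0;
            }
        }
    }

Swapping in a different guest instruction set means changing this decode loop, not the silicon, which is the flexibility being described above.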
Re:Doesn't matter (Score:1)
Re:OS/2 doesn't matter (its deader than the Amiga (Score:1)
Re:Article doesn't discern much (Score:1)
And a G3/400 laptop--the whole thing--only has a 45W power adapter. That'll run the hard drive, the DVD-ROM drive, the processor, the screen, the screen backlighting, the speakers, any external bus-powered USB devices...and charge the battery. All at once.
Oh, and it'll also let you slow the processor down to extend battery life; you know, like that amazing new feature Intel's announced they'll have Real Soon Now?
Admittedly, that's the G3 instead of the G4, but even if the G4 itself is using twice as much power as the G3, you'd need what, a 60W adapter?
Re:Even Faster (Score:2)
IIRC, some people were talking about why the G4 is 450 MHz (stable), whereas you can buy 1 GHz (stable, Kryotech) K7s. That is more than 2x the MHz rating. You can also buy 700 MHz K7s without the Kryotech stuff, which is 1.5x the MHz rating.
On par? No....
The designs of the G4 and the K7 are completely different. The K7 is like the Saturn V, whereas the G4 is like the Concorde. They use different fuels and travel at different speeds, but both are fast enough to get me from point A to point B faster than I can appreciate. So the comment that "Rather than a 700MHz x86 processor, you'd probably have a 1GHz or higher RISC processor that would make the current K7 and G4 look like snails" makes no sense, as to me the G4 (at its paltry 450 MHz) looks really damned fast. I can't even conceive of how fast 1 GHz would be. (Besides, how do you know that removing the x86 translator unit would speed anything up? Where's your EE PhD?)
Here are my points:
1) x86 works fine. I have plenty of working knowledge of how to program in asm for this instruction set, and plenty of proven working software for it (think Linux). The "flaws" you talk about are the same ones the RISC community rolled out when Intel had 200 MHz PPros out for more than 6 months before releasing a new CPU design. ISA zealotry annoys me, and doesn't help your possibly legitimate case at all.
2) The x86 is currently really a lot faster, even if it's still too fast for me to notice (except in RC5 rate). So why strive to go even faster, faster, faster, when things are already fast and getting faster (Moore's law)?
3) An all-new RISC design (like the implementations in the K7, K6-*, PPro, PII, PIII, and Celeron cores) would not have any software support coming out the door. The reason they have these micro-implementations is to let them add a layer of complexity to the chips and make them perform. They change the internals every generation, using different micro RISC cores. Once they sat down and used one for their flagship chip, they'd be stuck with it and lose the flexibility that the cores give in the first place. x86 is a nice general instruction set with instructions for whatever you need, which allows them to emulate it in any way they want (think Transmeta).
4) Have you noticed how the Sparc32, Sparc64, m68k, and a few other branches of the Linux kernel are not really well supported? It would take time even for Linux to come to bear on this new architecture. I'd rather have 1 GHz Linux now than a "possible" 1 GHz Linux on some new architecture.
5) AMD, Intel, et al. have an investment of years in the x86 chip business. It's what makes them their tons and tons of money. Why would they throw away the backwards compatibility that gives them oodles of dollars, just to become another bit player in the RISC business (which isn't worth nearly as much)?
Anyways, I'm ranting a bit because you are acting just a bit like a zealot. I praise you for being able to look ahead, but you seem to have a bit of a problem looking at the now. AMD wants x86 dollars, and they are getting them.
---
Re:My Favourite bar graph (Score:1)
Re:OS/2 doesn't matter (its deader than the Amiga (Score:1)
My point was that that there's a very small market for a low-end PC that can't run 'legacy' or old OSes. If you wanted such a thing, you might as well just buy an Alpha.
--
Re: Pointer to (rational) article on endian-ness? (Score:1)
Re:Why PowerPC will win out... (Score:1)
But die size will affect yield and price. I don't know where MOT and INTC are relative to each other re: process efficiency, but as they adopt smaller geometries and funky processes die size might become a gating factor to managing yield.
As the margins on the rest of the computer get driven into commoditized oblivion the price of the CPU will rise in relative importance.
(of course scale matters, too, but MOT should be able to keep the volumes up w/the embedded market)
Re:Why PowerPC will win out... (Score:1)
-Joe
Re:AMD Disadvantage (Score:1)
Re:AMD Disadvantage (Score:1)
Compaq provided the funds and engineers to port MS products to Alpha/NT.
All MS had to do to support Alpha/NT was to continue to ship the binaries on the CDs. It was Digital/Compaq that was doing all the work. Don't forget. Every NT CD that went out the door already had the Alpha binaries on it. That includes all the Back Office stuff, IE and service packs.
Re:Even Faster (Score:3)
Here's an example: in IA32, an instruction can start anywhere (addresses divisible by 4, addresses divisible by 2, odd addresses), and instructions can be of many varying lengths. This causes massive problems for pipelining and instruction decoding. If you're trying to decode three instructions at once, how do you know where instruction #2 is until you've finished with instruction #1? After all, they can be many different lengths. Now, clearly, this is not an insurmountable problem, as Intel and AMD have both pulled it off quite well. However, it does add expense. That money could either go into lower-cost chips or into more features for more speed, were it not for the necessities imposed by the aging instruction set. The PowerPC ISA (I talk of the PPC because it's what I know best; I believe others are like this as well) has instructions that start on addresses divisible by four. They are four bytes long. Period. Thus, it's very easy to see where instructions #1, #2, and #3 are, because each one is the same length. Barring a branch, it's simple to start decoding multiple instructions at once.
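A toy illustration of that decode problem (the one-byte "length prefix" here is a stand-in of my own; real x86 length decoding is far messier):

    #include <stdint.h>
    #include <stdio.h>

    /* Pretend the first byte of each instruction encodes its own length. */
    static unsigned insn_len(const uint8_t *p) { return p[0]; }

    int main(void)
    {
        /* Three fake "instructions" of lengths 2, 5 and 3 bytes. */
        uint8_t code[] = { 2, 0x90,  5, 1, 2, 3, 4,  3, 0xAA, 0xBB };

        /* Fixed-width (PPC-like): instruction i simply starts at byte 4*i,
         * so three decoders can start working in parallel. */
        for (int i = 0; i < 3; i++)
            printf("fixed-width insn %d starts at offset %d\n", i, 4 * i);

        /* Variable-length (x86-like): each start depends on decoding the
         * previous instruction first. */
        unsigned off = 0;
        for (int i = 0; i < 3; i++) {
            printf("variable-length insn %d starts at offset %u\n", i, off);
            off += insn_len(code + off);
        }
        return 0;
    }

In the fixed-width case the three starting offsets are known before any decoding happens; in the variable-length case they must be discovered serially.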
Now, you sound like you don't care too much about speed. I don't blame you. I'm typing this on a 300 MHz computer that I got, new, not more than two months ago. It's plenty speedy for me. Cost, however, is another thing. Rather than putting those saved dollars toward more features for more speed, they could simply pass the savings along to the consumer. Also, fewer transistors hanging around to support cruddy legacy "features" means less power consumption, which means a smaller electricity bill, particularly significant if your computer is on all the time. Maybe it's not that big a deal for you, but it's something to consider.
I agree that IA32 works fairly well. Cars with carburetors worked fairly well too, but now almost everything has fuel injection. Propellers on airplanes work pretty nicely in a lot of cases as well, but whenever a big job needs doing, jets have replaced the propeller. Even in a lot of smaller prop-driven airplanes, a turbine drives the prop instead of a piston engine. I could name more, but I think the point is made. IA32 works, but that's no reason not to wish for something better. I realize it's unrealistic, but I hold out hope that these companies that make so much money off of this market will decide to use their massive resources to make something truly new and good, rather than just sucking up profits from more of the same.
Btw, about the Concorde: it still takes three hours in an unreasonably small cabin to cross the Atlantic. Or so I hear.