G4 vs. Athlon Review

heatseeka writes "There is a great article at Ars Technica comparing the Motorola G4 and the AMD Athlon. They discuss every detail of the design of the CPUs, and give credit where credit is due." Hannibal does a great job dissecting the different chips, as well as explaining the background behind each chip.
  • Comparing these two processors just isn't as exciting as the Intel-AMD fight because they don't run the same software. It's like comparing apples to... OK, I'll stop. underwear goes inside the pants, peanut butter outside
  • This is one of the best CPU comparison articles I have ever read. He goes into good detail about the differences in design between the two CPUs, and makes it pretty easy to understand. The one point he makes quite clear is that although each unit has its advantages and disadvantages, there is no clear "winner", so to speak (he does favor the G4, though). I look forward to his next article about the AltiVec engine.
    =======
    There was never a genius without a tincture of madness.
  • Hmm...I tend to agree. Why compare apples to oranges when some people naturally prefer apples to oranges and vice versa? Either way, someone is going to disagree with the results and either way the majority will just keep on using the processor that they like.

  • IMHO, the PowerPC has more "room to grow" than any x86 chip, whether Intel, AMD, or otherwise. The reason is this: Power Consumption.

    IBM and Motorola are working on putting multiple PowerPC cores onto a single die. IBM has already done this with its Power CPUs (a sibling to the PowerPC). This is feasible with the PowerPC, since its power consumption is so very low. A G3 at 400 MHz (on a .22 micron process), for example, uses 8 W max, 5 W on average. A single PIII or Athlon uses at least 4-5 times that much on average. This is due in large part to the complex instruction set that must be decoded and executed. With IBM's and Moto's superior copper interconnect and SOI technologies, the power consumption and core size can be reduced further, allowing even more cores on each die. Modern multitasking, multiprocessing OSes with well-written multithreaded apps will scream on these multiple-core CPUs.
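
    A quick sanity check with those numbers: four G3-class cores at 5 W average is only about 20 W, i.e. four cores fit in the average power budget of a single PIII or Athlon. That's what makes multiple cores per die feasible on PPC and painful on x86.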

  • AMD doesn't have the luxury of being able to throw away whatever it wants to (like the x86 ISA). They are competing with Intel for the PC market. The G4 is a lock to appear in new Macs.
  • I would have to partially agree with a previous post about the chips being incomparable. Which is a better mode of transportation: a 1976 Jeep CJ or a 2000 NSX? Obviously the Jeep is better for some things, and the NSX is amazing for others. They're similarly incomparable. But if you compare things like how they handle in their respective target environments, that is answerable. The CJ is one of the best-handling stock off-road vehicles made, and the NSX is one of the best-handling on-road cars. They both handle very well and use different methods to get to their equal level of quality. Now back to the chips and the article. I think the article shows that they are both great at the same things in their target applications, as opposed to who plays Quake better. Good article, good comparison. (My new Athlon 700 smokes..)
  • They both can run GNU/Linux.
  • Just a quick reminder, ppl: don't try reading /. during ungodly hours of the night. I swear to God I looked at this article and read:

    "There is a great article at Art Rechnica comparing the Motorola G4 and the AMD Athlon. They discuss every detail of the design of the CPU's, and give credit where credit is due. " Hannibal does a great job dissecting the different chips, as well as explaining the background behind each chip.

    I was kind of amused at the thought of an art magazine reviewing the design of a couple of chips and the backgrounds behind them. Ugh.
  • by nickovs ( 115935 ) on Monday January 03, 2000 @07:58AM (#1410136)
    This article is very informative with respect to the architectures, but a useful follow-up would be to look at the performance in practice. Both of these processors can be used to run Linux, and it would be rather interesting to see how a pair of workstations fared in a side-by-side test.

    While such a test would be interesting, I expect that the results would, in practice, be as much a test of compiler maturity as a test of the speed of the underlying system. Despite the best efforts of the processor designers (out-of-order execution and all), these sorts of processors tend to be very sensitive to the compiler technology. Furthermore, many of the multimedia and vector processing performance enhancements (SIMD vs. AltiVec) really need to be accessed from assembler at the moment.

    Still, rather interesting stuff.
  • Why is every article on CPU architecture on websites like Ars Technica written at a fourth grade level? I don't need a review of branch prediction and instruction decoding. Sheesh!

    They should just leave it to MPF, which is about the only open source that does publish interesting technical information on CPU architecture.
  • Let me preface all this with: I know very little about how these things work exactly, so bear with me.

    It would seem from the article that although the K7 and the 7400 are pretty comparable at the moment, the 7400 has much more room to grow, as well as being a much more efficient chip.

    To me, the fact that the K7 has to decode all this x86 legacy stuff suggests that the K7 is basically a brute little monster truck that rampages over its flaws by packing a lot of punch, in this case by piling on more and more transistors.

    The 7400 seems to be a sleeker, more elegant solution to the whole thing. It's more the sleek sports car, with speed and elegant, efficient power vs. the brute force of the K7. So I guess in that regard the 7400 wins out on efficiency and future sustained growth potential...

    Must be said though that a mate of mine owns an Athlon and it rocks the house down, so even if it's like a brute little monster truck instead of the sleek sports car of the G4, it still packs a pretty hefty punch. I guess they both kinda rock the house down...

  • I agree such a test would be interesting. Just a quick correction, though: AltiVec is also SIMD, the Flynn classification which is used as a generic term in the literature.( AltiVec and SSE, MMX, 3DNow!, MIPSV, VIS and whatnot are SIMD implementations
    --

    BluetoothCentral.com [bluetoothcentral.com]
    A site for everything Bluetooth. Coming in January 2000.
  • by asparagus ( 29121 ) <koonce@NOSPAM.gmail.com> on Monday January 03, 2000 @08:16AM (#1410140) Homepage Journal
    Back when Apple started using PPCs, they threw the entire 68K instruction set out the window. They provided an emulator for the PPC, and then let the raw speed of the PPC platform gradually replace the older programs, which were quickly rewritten for the new processor. Now, (9?) years down the road, the PPC is slim and trim. It's a pity Intel/AMD/whoever doesn't have the balls to kill the x86 instruction set. (And don't get me started on Merced.)
  • The Athlon is an amazing chip, even more so given the need to maintain backwards compatibility with real-mode x86 code and the hack that is MMX. The only performance improvements I really expect to see going forward in the x86 architecture are going to be due to process improvement rather than architectural development. MMX and 3DNow! are kludges on the architecture. In light of that, the Athlon stands out even more.

    The G4, though, has the advantage of being a lighter-weight chip (fewer transistors needed, fewer instructions, less microcode). As for speed, RISC versus CISC aside, the Motorola/IBM designs have not shown the ability to drive the high clock speeds that Intel and AMD are playing with. Until about a year ago, the two were neck-and-neck, but the x86 chips are now up around 800 MHz while the G4 is just passing 500. But given the efficiency of not having to deal with all the microcoded x86 instructions, the G4 minimizes the difference under a well-implemented OS.

    Another thing to keep in mind (mentioned in the article) is that the G4 is not strictly designed for desktop computers. PowerPC chips are very popular in the embedded market, where they go into single-board computers, automobiles, and all sorts of dedicated hardware. Sales to Apple alone wouldn't keep a chip family alive. Interestingly, Intel sells a lot of older 386 processors to the embedded market too - the too-cool BlackBerry two-way pagers, among other devices, use a 386 processor.

    The best thing that PowerPC has going for it IMHO is that Motorola didn't build backwards compatibility with the M68K series processors. They made an architectural clean break - and the few companies that needed compatibility did it through emulation (parts of the MacOS are still in 68K code today). The ample shortcomings of the MacOS tend to cover up what is a first-rate processor family.

    My suspicion as to the 'real' reason Intel has been funding Linux ventures is this: they know that Windows is hopelessly tied to x86, and they are hoping to eventually leave that baggage behind with the IA-64 architecture. Ultimately, x86 will be a drag on clock speeds.

    Sorry to have rambled about here some, but I'm still a bit sleep-deprived from the weekend.

    - -Josh Turiel
  • Oops, I screwed up in the first posting. Here is a corrected one:

    I agree such a test would be interesting. Just a quick correction, though: AltiVec is also SIMD, the Flynn classification which is used as a generic term in the literature. (Flynn's seminal paper is where the terms SIMD, MIMD, etc. were first coined.) AltiVec and SSE, MMX, 3DNow!, MIPS V, VIS, etc. are SIMD implementations.

    Also, Motorola has a bunch of very nice C libraries and modified compilers that take advantage of the new instructions, and I'm sure some other SIMD extensions have similar C libraries, so it's not exclusively accessible from assembly language. What's probably needed is some very smart compilers that choose when to use the SIMD extensions.
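
    For illustration, here's roughly what the C interface looks like - a minimal, untested sketch from memory of Motorola's documented AltiVec programming interface (vec_ld/vec_st/vec_madd; the vector-literal syntax varies by compiler), assuming 16-byte-aligned arrays and an AltiVec-aware compiler:

      #include <altivec.h>

      /* y[i] = a*x[i] + y[i]; vec_madd does four single-precision
         multiply-adds per instruction */
      void saxpy(float a, const float *x, float *y, int n)
      {
          vector float va = (vector float)(a, a, a, a);  /* splat scalar */
          int i;
          for (i = 0; i + 3 < n; i += 4) {
              vector float vx = vec_ld(0, &x[i]);  /* 16-byte-aligned load */
              vector float vy = vec_ld(0, &y[i]);
              vec_st(vec_madd(va, vx, vy), 0, &y[i]);
          }
          for (; i < n; i++)   /* scalar cleanup for the tail */
              y[i] = a * x[i] + y[i];
      }
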
    --

    BluetoothCentral.com [bluetoothcentral.com]
    A site for everything Bluetooth. Coming in January 2000.
  • Not to boost your ego any more, but get a grip buddy. Have you seen any other articles that talk about this kind of material? It may be too simple for you, especially since I'm sure that you "know it all" already, but I think this article is a great example of what I would like to see more of on the 'net: some discussion rooted in real technology instead of PR claims and advocate BS. I don't know it all, I'll readily admit, and such an article is very useful for me to see the hows and whys and wherefores. But for you to say that there is "no real information here" is a joke. You're either an elitist prick, or someone who just wants to kick dirt at an informative article.
  • The only reason Apple was able to do it is that they didn't have very many Macs in use, and their systems are so proprietary that whatever they say goes. On the other hand, there is heavy competition in the PC world, and there are too many PCs in existence to just suddenly drop support for them. Chances are that any company (AMD, Intel) that just dropped the x86 instruction set would lose a lot of business, and nobody would support it. They will always have to maintain compatibility with previous generations and just add new instructions to make it more powerful. I think that it would be great to have a new RISC-based chip for the PC (it really wouldn't be a PC then), but it just isn't practical.
  • by Anonymous Coward
    check it out:
    http://www.motorola.com/SPS/PowerPC/AltiVec/
    and click on "programming examples"

    cheers
  • by MacBoy ( 30701 ) on Monday January 03, 2000 @08:31AM (#1410147)
    To quote from the article:
    Since the K7's FPU handles vector operations, it's not always totally free to do fp ops like the G4's FPU is. But considering that vector and regular fp calculations aren't normally mixed, the K7's fp performance should exceed that of the 7400 under most circumstances...
    The G4's vector unit (AltiVec) is way more complex than the K7's. It can do floating point operations - four SP (single precision) or two DP (double precision), in fact. In combination with the FPU of the G4 (which can do one SP or DP FP op), the G4 can do no fewer than five SP FP ops or three DP FP ops per cycle. Any application that does FP ops and is compiled with an AltiVec-enabled compiler (such as CodeWarrior or Motorola's) will take advantage of this superior capability. AltiVec's 32 128-bit-wide vector registers and its 155 vector instructions make it a formidable number-cruncher.
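
    To spell out that per-cycle arithmetic (taking the two-DP figure at face value):

      SP: 4 (AltiVec) + 1 (FPU) = 5 FP ops/cycle
      DP: 2 (AltiVec) + 1 (FPU) = 3 FP ops/cycle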
  • by Paolo ( 87425 ) on Monday January 03, 2000 @08:32AM (#1410148) Homepage
    While it is one of the first truly unbiased and highly technical articles on the K7 and G4 chips (instead of rumors and performance "benchmark" drivel), it does not say that much about the chips in the end. It should have concluded with a stronger statement about the efficiency of the G4 chip.

    The US Govt is often criticized for implementing obscenely expensive solutions to problems when simple ones would have done the job better. This can be applied to the K7 vs. G4 question, for it is always better to have efficiency when the performance is the same.

    Rumor has it that Intel is currently running the new Itanium chips at 30 watts of power consumption, over twice that of the G4. If I upgraded my motherboard to the Itanium(tm), I would have to get a new case or power supply because of the incredible inefficiency of the chip. That is not novel engineering but sluggish engineering, something which is not prized in this day and age.

  • I second that. I'm not a CPU engineer or any such person, but I do appreciate the more complex discussion that got away from benchmarks in Photoshop (which is what I was expecting to see). The fact that this comparison was technical in form but understandable to me is what I liked the most. I wish some of the more over-the-top advocates out there (on BOTH sides) would take the time to really read and understand the technology behind the CPUs. Where I work there's probably a CPU argument every day (cool place, I know :), but stuff like in this article never gets mentioned. I do have one complaint, however. Even if the information is very informative, I do think that it would be good to separate out the general information from the specific information. Still, a good read was had by me.
  • Sorry that I keep ranting about Alphas, but the (well-written and informative) article mentioned the Alpha CPU under its breath when talking about the ability of both the K7 and the G4 to achieve OOO (out-of-order) execution.
    This bookkeeping takes special hardware, hardware that both the K7 and MPC7400 have and that a more traditional RISC machine like the older Alpha lacks.
    So is this something Compaq/DEC/whoever is working on, or is it even that important? The only drawback I found in the a.t. article is that the desire for OOO was never motivated, but then again, that may be only because I took CPU architecture a few years ago (before OOO became a big deal).
  • by Anonymous Coward
    It is a no-brainer that these are the two best chips you can get in a PC today. But average Joe Six-pack won't buy either of them because they don't have the name Intel stamped on them. Take your Athlon or G4, duct-tape an Intel logo on it, and then maybe the average user will pay attention. What they really need is better marketing.
  • by MacBoy ( 30701 ) on Monday January 03, 2000 @08:42AM (#1410154)
    ...As for speed, RISC versus CISC aside, the Motorola/IBM designs have not shown the ability to drive the high clock speeds that Intel and AMD are playing with. Until about a year ago, the two were neck-and-neck, but the x86 chips are now up around 800 MHz while the G4 is just passing 500...

    True, but the gap will lessen (or disappear) in the near future. The G4 has been limited (in clock speed) by its exceptionally short 4-stage pipeline. Motorola has demonstrated a version of the G4 with a longer 7-stage pipeline that hits much higher clock speeds (~700 MHz range at the demo - higher in production). Each stage is simpler and faster, resulting in the higher clock speed. The K7 already has a very deep pipeline, which is a large factor in its high clock speed.
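
    Rough math, assuming equal total logic depth and ignoring latch overhead: clock rate scales roughly with the number of stages, so re-cutting a 4-stage design running at 500 MHz into 7 stages could in principle reach about 500 MHz x 7/4 = 875 MHz. Real designs fall short of that ideal, which is consistent with the ~700 MHz demo.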

  • by Pope ( 17780 ) on Monday January 03, 2000 @08:45AM (#1410155)
    is here [macinfo.de]

    It shows power consumption of the major chips in use. Note where the PPC chips are! :)
    Enjoy.


    Pope
  • As for speed, RISC versus CISC aside, the Motorola/IBM designs have not shown the ability to drive the high clock speeds that Intel and AMD are playing with.

    The PowerPC doesn't need the high clock speeds of the Intel/AMD chips. On average, it does about twice as much per clock cycle as the x86 chips do.
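
    Rough arithmetic, taking that factor of two at face value: 500 MHz x 2 instructions/clock = 1000 million instructions/s, versus 800 MHz x 1 = 800 million instructions/s. The "slower" chip wins.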

    Comparing clock speeds without consideration of clock efficiency is like comparing the version numbers of the various Linux distributions.
  • AFAIK, the Alpha simply does not do OOO. It relies entirely on the compiler to organise the instructions into the most efficient possible order. (Almost all compilers for all CPUs attempt to do this to some degree, BTW.) Leaving out the OOO capability makes the CPU much simpler. That lets the designers concentrate on achieving high clock rates. But these simple, fast CPUs are heavily dependent on the compiler delivering efficient, optimised code in order to achieve high performance.
  • Actually, I don't think this is the best explanation. In the 68K-to-PPC conversion, Apple said, "Here's a new and faster chip, and because of our software emulator, 99% of your stuff will still run." It was a) the emulator, and b) the just-recompile-and-ship-a-single-fat-binary approach that allowed them to get away with it. Why this didn't work for x86 emulation on the Alpha illustrates the problem (no market penetration, multiple versions, etc.).
    --
  • Our organization uses the PowerPC microprocessor in cases where we had used other processors in the past, where power consumption and other small factors had become increasingly a problem over time. Also, the fact that the architecture isn't tied to an OS made things much easier to work with (stable), and thus we could design our systems to do exactly what we wanted, how we wanted. We also have the flexibility to change that how and when we want, while sticking with the same processor and a familiar architecture.

    Granted, we don't make personal computers, but IBM has marketed those processors toward the Linux world.
  • Motorola/IBM designs have not shown the ability to drive the high clock speeds

    I think this is probably not so much an issue of microprocessor design as of fabrication capabilities.

    Also, clock speed is not a valid comparison between two different chips, especially two vastly different chips: what you get per MHz is not the same. Otherwise you would be able to buy an 800 MHz Z80. It's like the truck company that built a two-stroke pickup: people got freaked out by the fact that it redlined at 4000 RPM.

  • Confused on several points:

    "The only reason apple was able to do it is because they didn't have very many macs in use"

    There are millions of 68K-based Macs. More to the point, at the time they dumped the 68K instruction set, 100% of existing Macs were based on the 68K line of chips.

    "their systems are so proprietery [sic] that whatever they say goes."

    For what it's worth, this is true. Unfortunately, you go on to say:

    "On the other hand, there is heavy competition in the PC world and there are too many PC's that exist to just suddenly drop support for them."

    First of all, dumping the x86 instruction set from new chips will not 'drop support' from the existing PCs. The existing PCs will remain unchanged. What dropping the x86 set would do is remove backwards compatibility in the new chip. This would be a bold move, and has yet to be attempted, in spite of the 'heavy competition in the PC world.'

    What is the paradox here? Perhaps 'heavy competition' does not automatically equal 'elegant new technology.' Or, perhaps there is not as much 'heavy competition' as PC-owners would like to imagine. They don't call it Wintel for nothing. Also, few people give Apple credit for heavily competing with ALL the PC companies, all the time, all by themselves.

    Which brings us to Linux. Divorced from Microsoft and Intel, what are now ironically called PCs may finally see all kinds of funky new hardware and software technology. Now that IBM is selling PowerPC motherboards, we may soon see G4 and K7 boxes side by side in the computer store--both running Linux. Now that's innovation and freedom of choice!

    Mike van Lammeren
  • by Megane ( 129182 ) on Monday January 03, 2000 @09:23AM (#1410165)
    One important part of the PowerPC architecture which this article fails to mention is the multiple condition code registers of the PPC. (These date back to the older Power architecture, BTW.)

    Unlike all the other similar features mentioned in the article, these cannot be retrofitted into the K7, because it is limited to the x86 instruction set, which does not have this concept.

    Basically, any instruction which needs to check the result of an operation (such as a compare, or overflow from an arithmetic operation) has to use condition codes. But in a pipelined processor, the result of the operation usually has to wait until the instruction has finished going through the pipeline. Rather than wait this long to decide what to prefetch, branch prediction tries to guess whether or not the branch will be taken. The predictions are usually right, but not always. What if there is more than one such comparison close together, particularly if the result is not being used directly for a branch, but for a boolean expression?

    What the PPC does is have multiple (eight) condition code registers. When an operation such as a compare is done, you select a condition code register to receive that result. In the same way that code can be optimized for RISC by interleaving multiple threads of operations such that the result of an operation isn't used until three or four instructions later, the condition code register usage can also be interleaved.
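
    As a sketch of the kind of code that benefits (whether a given compiler actually splits the compares across condition-register fields is up to the compiler):

      /* Two independent compares: a PPC compiler can direct each at a
         different condition-register field (say cr0 and cr1) and combine
         them with a CR logical op, instead of serializing both through a
         single flags register the way x86 must. */
      int both_less(int a, int b, int c, int d)
      {
          int x = (a < b);  /* compare #1 */
          int y = (c < d);  /* compare #2, independent of #1 */
          return x & y;     /* bitwise combine keeps it branch-free */
      }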

    With out-of-order execution (OOO), the CPU automatically rearranges instructions to achieve this interleaved usage of registers. And thus the PPC gains this advantage with condition code register usage as well.
  • Their systems are so proprietary that whatever they say goes.

    It's true Apple's systems are proprietary, but so what? There is no reason that AMD could not do the same thing. Suppose that AMD and Microsoft co-developed a new ISA, then developed a software emulator for the old x86 architecture, and Microsoft began writing Windows to run on both architectures. I see no reason that this could not happen, except that Microsoft may not have the ability or desire to do it. (remember NT on Alpha?) But that's a failing on the part of Microsoft, not a limitation of non-proprietary hardware.

    In fact, if Microsoft had stuck to its original plan to develop NT for multiple platforms, then this could have been done quite easily once they started selling NT as a consumer product. I don't see what proprietary hardware has to do with it. Apple just has more guts than the Wintel world does.
  • Stop being an arrogant prick. MPF, I'm presuming, refers to the Microprocessor Forum: http://www.mdronline.com/mpf/ - it's a seminar-like event where MPU papers and such are presented.
  • Moto didn't have a choice in whether to implement 68K opcodes in PPC; IBM owns PowerPC, and Motorola is just licensing it. Trying to add an opcode to the PPC is a real nightmare because you have to get IBM's approval. Moto went to PPC because Apple demanded it, not because it was better than Moto's own RISC, the MC88110. Actually, clock for clock the '110 was faster than the 601, and it had a better bus design than the Power architecture, so they put the MC88110 bus on a Power core, and that became the PPC601. The '110 also had a graphics execution unit as well as FP and integer units. Of course, its internal design and transistor technology limited it to about 65 MHz, and it was a few years late, so Apple wanted an architecture with some industry backing: IBM. NeXT was well under way in designing a dual-'110 machine; I wonder whatever happened to it. One CPU did the color Display PostScript, and the other ran the NeXTSTEP OS. I'm sure Jobs was having a bit of deja vu when Moto couldn't deliver the 500 MHz G4s on time or in quantity.
  • Actually, the 21264 is an O-O-O machine. Alphas had always been considered "speed demons" because DEC did everything to push the clock, while HP, for example, pushed IPC with O-O-O, leading to the "brainiac" labelling of the PA-RISC.

    The 21264, therefore, can be safely called a "demoniac."

    Future Alphas will start to make use of Simultaneous MultiThreading to run threads at the CPU level. I don't know what form this will take, initially.

    --

  • Also, the G4+ will have two AltiVec units, two FPUs and four integer units, each with 32 dedicated registers. Plus it will use 256-bit data paths, integrate up to 1 MB of level 2 cache onto the die, and support up to 4 MB of level 3 cache. That's why the PPC will eventually pull away: they have the space to do more.
  • Hmmm... Let's see if I can make this a little more clear.

    "There are millions of 68K-based Macs. More to the point, at the time they dumped the 68K instruction set, 100% of existing Macs were based on the 68K line of chips."
    And how many x86 PCs are there?

    "First of all, dumping the x86 instruction set from new chips will not 'drop [software] support' from the existing PCs."
    But will new programs run on the old systems? Will old programs run on new systems? Will there be Linux software for it, such as a compiler or a kernel? That's what I meant by 'support' - or rather, software support.

    "This would be a bold move, and has yet to be attempted, in spite of the 'heavy competition in the PC world.'"
    The idea is that there are so many PC systems that people will not just change to this new architecture, because of the existing applications that run on these systems. The 'competition' would just take over the market. Look at Intel. The reason they were so successful in the beginning is that they were associated with compatibility. The reason that PCs (as in x86 architectures) were successful is that they were more open, which led to more competition. Hmmm... Maybe this so-called 'competition' does act as an incentive not to drop 'support' for the existing technologies (the x86 architecture), as in not making chips that 'support' the old instruction set. There would be no software to run on them.

    "What is the paradox here? Perhaps 'heavy competition' does not automatically equal 'elegant new technology.' Or, perhaps there is not as much 'heavy competition' as PC-owners would like to imagine." How many companies make G4 processors? How many companies make mother boards for these? There only appears to be one company. Naturally there must be competition if there's only one company...

    "few people give Apple credit for heavily competing with ALL the PC companies, all the time, all by themselves."
    The only reason they are competing is that they have pretty computers. I really don't see how they got as far as they have, as they sell over-priced, under-powered machines. How much is the high-end G4? How much are Dell's even higher-end workstations? There doesn't appear to be much innovation, or any competition to drive innovation, there...

    "Which brings us to Linux. Divorced from Microsoft and Intel, what are now ironically called PCs may finally see all kinds of funky new hardware and software technology. Now that IBM is selling PowerPC motherboards, we may soon see G4 and K7 boxes side by side in the computer store--both running Linux. Now that's innovation and freedom of choice! "
    That would definitely be cool!
    Maybe this so-called 'competition' does act as an incentive not to drop 'support' for the existing technologies (the x86 architecture), as in not making chips that 'support' the old instruction set. There would be no software to run on them.

    This is the point of the original post--the support could be provided in software, saving space on the silicon and allowing native apps to run faster and cooler. Many apps don't need the kind of horsepower that modern chips have, and the few that do can be released as "fat" apps, as was done on the Mac. The whole point is that there *were* apps to run on the Power Macs--in fact, almost every 68K app ran on the new Macs, just not as fast as they otherwise would.

    The only reason they are competing is because they have pretty computers. I really don't see how they got as far as they have, as they sell over-priced, under-powered machines.

    Yes, they make "pretty computers," and yes, it is kind of silly. But that is far from the only reason. Linux/Solaris/*BSD is simply not an option for most users. And although Windoze is improving, it is still uglier, more bloated, less intuitive, and less integrated with the hardware than the MacOS. Style matters. Ease-of-use matters. Reliability matters. And MacOS is still the best choice in all three areas.

    You're probably a Linux geek, and that's fine. But let me point out that the Mac is not designed for you. The fact that you are not impressed with it does not make it an inferior product. This engineer's myopia of judging a product based solely on raw performance and price is not helpful. There's more to a computer than its MHz rating and its hard drive size.
  • They include the L2 cache on some chips, but not others, and don't bother to mention the size of the cache. Just look at the three different 200 MHz PPros, which each consume varying amounts of power since they have 256 KB, 512 KB, and 1 MB caches.
    This would be a good graph if your main concern is raw power consumption of a normal processor purchase (I'm sure that you could get cacheless Athlons if you buy enough).
  • Guys, at least read the article before you post. This wasn't a "this CPU runs Quake 3 faster than that one" type of article. It wasn't about the platforms the procs run on; it was about their technology, and how it was implemented. It was a very good article, and I suggest you read it.
  • Um, you're smoking what?

    The only reason Apple was able to do it is that they didn't have very many Macs in use...

    OK, that's just stupid.

    ...and their systems are so proprietary that whatever they say goes.

    This is one of Apple's biggest advantages when it comes to industry-leading innovation, although it is a bit annoying when you're thinking of making a purchase. Fortunately they're dumping most of their proprietary (or just different) hardware in favor of more commonly accepted standards (HD-15 instead of DB-15 monitor connectors, USB instead of ADB, no more 8-pin mini-DIN serial ports...).

    On the other hand, there is heavy competition in the PC world, and there are too many PCs in existence to just suddenly drop support for them.

    Once again, you're on crack.

    Chances are that any company (AMD, Intel) that just dropped the x86 instruction set would lose a lot of business, and nobody would support it. They will always have to maintain compatibility with previous generations and just add new instructions to make it more powerful.

    This is the view that is currently held, as is evidenced by what products are out there, but with the rise of Linux (and, to a lesser extent, other open-source operating systems), processor-dependence is becoming less important, and chipmakers know this. And remember, AMD has also made a bold move, and been successful - they released a processor with no motherboard support whatsoever, and lo and behold, we have Athlon motherboards now. If AMD says they're making a new processor that uses a new instruction set, you can bet that Linux will support it before too long.

    I think that it would be great to have a new RISC-based chip for the PC (it really wouldn't be a PC then), but it just isn't practical.

    Um, PC means Personal Computer. If you change nothing but the processor, you also have to change the software (new Linux distribution, recompile, woohoo, not that hard, just like all the other architectures Linux runs on [PowerPC, Alpha, Sparc...]).
  • What would *really* be a nice boost is if it were possible to access the RISC component of the Athlon (ie. bypass the x86 decoder).

    Basically, that would allow one to run legacy apps by allowing the Athlon to operate as a smokin' fast x86... and run new apps by allowing the Athlon to operate as a smokin' RISC machine.

    And if this could be done in a multitasked environment, so much the better -- running a legacy app *and* a new app simultaneously.

    Too much to hope for, I'm sure!
  • Yes, and back to the comment "comparing these two processors just isn't as exciting...because they don't run the same software" - no, but there are metrics you can measure, i.e. float and int performance, and someone has already done the work for you in the SPECint95 and SPECfp95 benchmarks.

    Now, I am too tired from the weekend (partying Friday night and working early Saturday morning - Y2K testing) to chase the links and give you the exact comparisons between the fastest of each chip, or MHz comparisons, but someone has already done the testing, and the results can easily be found with a web search.

    --cheers & happy new year!
    Dan
  • That's probably the *last* thing I look at when choosing a new CPU. Price and performance come first, of course.

    Why are people trumpeting that the G4 only uses 4 or 5 watts, or whatever? Who cares? The Alpha CPUs, widely regarded as the fastest money can buy, use something like 100 watts, but I'm sure the people that buy them really don't care about that either.

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • Yep, along with a whole slew of other different factors that make the truly geeky give a damn about any of this crap. :)

    Power consumption is *IMPORTANT* if you're making laptops or embedded systems, correct?

    Pope
  • For the same reason you don't want a car that only gets 5 miles per gallon.
  • And thanks to the iMac, we now have orange apples...
  • The StrongARM doesn't have an FP unit...
    John
  • ...until you try to cool the mutha.
  • Well, ok, in most areas Linux beats Mac OS in the reliability department. But both beat Windows, and Linux is so far behind in the other two categories that it's not even an option for most users.
  • I'm sorry, but I can't even begin to imagine such gross ignorance. It boggles my mind. Hitler couldn't have done half the brainwashing needed to accomplish this level of ignorance in his wildest dreams. THIS is scary.
  • This could be the worst instance of flamebait I have ever seen; then again, they could just be a total psychopath... yeah, I am going to have to agree with the psychopath theory.

    PS: I don't like the G4's either, but you seem to have had just a little too much to drink, or not enough to think that.

  • I would say even Windows 9x is more stable than the Mac OS; granted, I haven't used it since 8.0, but damn was the Mac OS BUGGY as hell.

    7.6 and earlier were buggy as hell, and if you're running wacky programs, Windows might handle it better. But I'm currently running Mac OS 9, and it only crashes about every two weeks. I think it depends a lot on system configuration. A well-configured system on any platform will be more stable than a poorly configured one. But OS 9 is extremely stable if you don't abuse it much.

    One advantage that increases Mac OS stability is the smaller number of machine types and the more tightly integrated hardware and software. It might be that once you install all the right pieces Windows is as reliable, but getting it there is a pain. And Windows isn't very helpful. Apple has worked hard to ensure that every version of their OS supports every recent machine. Their latest (OS 9) supports any machine with a PPC, which takes you back 6 years.

    Anyhow, system stability has improved dramatically since System 7, and at this point system configuration is more important than OS features. And it looks like Mac OS X will be out long before Microsoft's consumer NT arrives, so that'll widen Apple's lead in this area.
  • I didn't mean NT. NT is indeed more stable than Mac OS, but it is not a consumer product. It's too pricey, and is still too complex for Joe Sixpack to set up and use.
  • Does this guy know anything about architecture?

    In a word, yes. He knows an awful lot. If you don't know what "post-RISC" means, why don't you read his article on RISC vs. CISC like he suggested? Here, I'll make it easy for you: RISC vs. CISC: the Post-RISC Era [arstechnica.com]

    Of course, it also looks like you didn't even bother to fully read this article. Hannibal is hardly anti-Mac, anti-G4, or any of that. He concludes that he prefers the G4 over the Athlon (and the Alpha over both of them). Give me a break...


    --------
  • Hear, hear. I like this response so much I think I'll save it for later use.
  • Well, ok, in most areas Linux beats Mac OS in the reliability department. But both beat Windows, and Linux is so far behind in the other two categories that it's not even an option for most users.

    Where I work, I admin a Mac OS X Server machine that provides AppleShare (and Samba) file services. There is a web interface that lets you control the AppleShare file server remotely. One of the things it does is give you a table view of all the current connections and how long they've been connected. Below is a list of the current active connections' connect times. What this means, of course, is that each of these client Macs has been running at least as long as it has been connected to the server. I was rather surprised at some of these connect times:


    Total Connected Users: 18
    Connected for... (days hh:mm)
    42 06:22
    31 08:47
    31 08:42
    28 07:27
    27 08:46
    24 00:42
    21 06:59
    19 04:39
    18 05:26
    17 05:45
    17 03:08
    14 03:23
    13 06:41
    13 02:37
    12 02:14
    7 08:45
    6 08:20
    2 05:27
  • In general, adding more stages also decreases the amount of work done per clock cycle. Will Motorola end up with a processor that is truly faster, or just higher MHz? All other things being equal, of course.
  • Wow...the PowerPC chips use a lot less power than the Intel chips. No wonder Mac laptop battery life is so much...oh, wait...Mac laptop battery life is the same as comparable PC laptop battery life.

    So, remind me why I'm supposed to care about CPU power consumption?

  • Can we say "Kryotech"? I think that's cool enough.

    And how much additional power does the Kryotech cooling unit use?

  • It's not the number of courses on computer architecture you've taken that counts. It's the number that you've passed. Come back when that is non-zero.
  • Wow...the PowerPC chips use a lot less power than the Intel chips. No wonder Mac laptop battery life is so much...oh, wait...Mac laptop battery life is the same as comparable PC laptop battery life.

    Do they have the same battery life? According to Apple's web site, the iBook runs up to 6 hours on a single battery, the PowerBook runs up to 5 on a single battery, and you can put 2 batteries in it (one would go in the spot normally used for the CD-ROM/DVD drive). I went to Dell's web site, and for two of the three laptops available the times were 2.5 hours per battery; the third was 3.5-4 hours per battery. I didn't take a lot of time to look, but I assume that you can put two batteries in if you forgo the CD-ROM/DVD drive. So, you were saying?

  • Perhaps because low power means one might feasibly use said chip in a laptop while that is not an option for its power-slurping brethren?

    From the article, I gathered that the Athlon has a better core but is limited by the x86 frontend. My question is: to what extent is this frontend changeable? Could AMD theoretically place a different frontend on the Athlon to make it a PPC chip?

    And, is this the type of idea that the Crusoe processor is going to have - i.e. modulating the frontend on a chip so it can run many architectures?

    The mind ponders.
  • What would *really* be a nice boost is if it were possible to access the RISC component of the Athlon (ie. bypass the x86 decoder).
    Basically, that would allow one to run legacy apps by allowing the Athlon to operate as a smokin' fast x86... and run new apps by allowing the Athlon to operate as a smokin' RISC machine.


    Why does everyone assume that x86 is somehow bad? Has the Apple "RISC is god" propaganda gotten to you? This issue has been discussed on Slashdot before. x86 is fine. Don't fix what isn't broken.

    Besides, the Athlon goes 700 MHz. If that's not fast enough, I suggest buying a Cray :-)
    ---
  • I haven't seen any comments on the fact that the G4 can easily be modified to work in mobile devices, while the Athlon runs rather... warm.
  • The ample shortcomings of the MacOS tend to cover up what is a first-rate processor family.
    Does LinuxPPC do a better job of running "natively" on the G4 processors? Or is Linux so rooted in x86 that it's hampered on a G4 chip?
  • by Anonymous Coward
    First off, they're not the same. Apple claims 5-6 hours/battery with their portables, and I've usually found their estimates to be conservative. Most Intel boxen claim 2-4 hours.

    Secondly, Intel boxen use stripped-down "mobile" cpus. Last I heard, a "mobile" p2 is about a quarter the speed of a similarly-clocked "normal" p2.

    Whereas Apple gets to use exactly the same chips in both desktops and laptops. They consume less power not only in direct dissipation, but in not needing to use a fan to deal with heat, and they turn out to be dramatically faster.

    Apple has still never shipped a machine that needs a cpu fan. A heat sink is common, but they're often as small as a quarter on current laptops.

  • From the rumors I heard, Intel engineers would be selling their grandmothers to only be using 30 watts. I'll look for the hard numbers, but I've heard _70_ when they run the things at full speed.
  • I just had a thought here. Bear with me.
    With Alpha/NT, Digital bore the costs of creating and maintaining the Alpha versions of the OS, tools and applications.

    What if AMD did the same thing? Imagine a version of Win2K compiled to native K7 code. AMD could re-use the FX!32 stuff that Compaq had put into the kernel. Then they could create a K7-lite that didn't have the x86 baggage.
    That would be the el-cheapo version. Then you've got the normal K7 that can still run x86 apps.

    It would be an interesting experiment and probably scare the crap out of Intel.

    Of course, this could also be a crack-pipe fantasy.

    P.S. NT on Alpha died when Compaq decided they were pulling the plug. Since Microsoft doesn't sell any hardware, why should they support an architecture that the manufacturer doesn't support anymore?
  • Comparing clock speeds without consideration of clock efficiency is like comparing the version
    numbers of the various Linux distributions.


    Plus, a lower clock speed has far fewer design issues than a higher one. Lower-clock-speed chips can use cheaper fab processes, and the motherboard can be laid out with less concern for cross-coupling/transmission-line problems, clock skew (very important), and a host of other problems inherent in high-speed/microwave design.
  • I think your analogy is a little flawed, since we should only be comparing the chips themselves (like the engine of the car).

  • by Anonymous Coward
    Ok, so the G4 is a nice chip. Let's say I'm willing to give it a try. Does there exist a commodity-priced G4 MB in industry standard ATX form factor that can be put into a commodity industry standard case and hooked up to a commodity industry standard power supply? Oh, and the motherboard must accept industry standard memory DIMMs, and support the PCI bus.

    Ahh, there's the rub. Or should we say ``the fly in the ointment''? If a screwdriver shop can't build it, then my interest in the G4 remains but academic. I really don't have a use for ``boutique'' computers.

  • I just bought an Athlon 700 MHz, but the stable series of Debian won't install or even boot with an Athlon. The unstable series works, but that involves A LOT of work to get shit working. If anyone has any resources (how-tos) or information on how they got Debian to install on their Athlons, I would appreciate it. Thanks. PS: Sorry for the improper grammar; I can't see what I am typing. I am in Windows right now because I can't get Debian installed, and Netscape popped up an illegal operation error, but it still allows you to surf the web and write this; it just sorta sticks an error box in your face.
  • I can't do better. Given the range of talent represented here, I wouldn't be surprised to find someone who could, but that's irrelevant.

    The point is that there is someone out there who can do better. Not necessarily the person posting.
  • Use a distribution whose boot images use a kernel that was released AFTER the Athlon release; the Linux kernel needed some K7-related patches to work right. You can try the latest Red Hat or SuSE, or
    Debian 2.2 (potato) if you insist on Debian.
  • If x86 is fine, why is it that Motorola and IBM can pretty much keep par with Intel and AMD despite vastly smaller R&D budgets? Read the article; it talks about all the hoops the K7 has to jump through to support the aging x86 instruction set. This set of instructions was never designed for a high-performance processor, and it lends itself extremely poorly to such things as pipelining. The x86 instruction set does work, but there are better things out there. Its only virtue is that it's what everybody else uses. Some virtue.

    Imagine if AMD could take all the effort it spent on making that aging x86 instruction set work with their spiffy new processor and put it into making the processor fast instead. Rather than a 700 MHz x86 processor, you'd probably have a 1 GHz or higher RISC processor that would make the current K7 and G4 look like snails.
  • The Power Mac 5500/225, 5500/250, and their 6500 counterparts use 486-type fans screwed to the heatsink. They make a lovely racket when their bushings wear out, too. Incidentally, Apple laptops aren't fan-free either. While it is true they don't employ CPU fans, the G3 Series PowerBooks do have a small recirculating case fan turned sideways in the rear of the units. Some of the other Mac techs could probably come up with more examples.
  • Well, you hear rumors about AMD's 64-bit "K8" chip, which would have a unique ISA. However, the only OS that would run on this chip would be Linux, unless AMD forked Microsoft some cash to port NT. Since the Linux market isn't nearly big enough to support a unique ISA aimed at the low end, the K8 talk sounds more like anti-Intel marketing FUD than anything real.

    OS support is the biggest reason the Apple-style emulator approach wouldn't work in the PC world. DOS still matters, as do OS/2, NetWare, BeOS, and old versions of Windows. Since it's unlikely that an emulator would ever be added to these OSes, you'd be cutting yourself off from the 'legacy' market and essentially end up like NT/Alpha.
    --
  • People seem to have forgotten the point of RISC in the first place. RISC's main point was to move final control of the system's pipeline from the hardware to the compiler.

    In true RISC processors (MIPS and SPARC are good examples), the compiler can and does produce an efficient pipeline without stalls of any sort, which means that OOO is wasted on such a chip.
    The problem comes in with inefficient compilers and poor resource management. To solve this, hardware developers introduced things like OOO and branch prediction to let bad compilation algorithms retain their top speed. But this leads to a bad trend, quite apparent in the x86 compiler community: letting the hardware schedule instructions does not encourage compiler developers to make better and better compilers, instead allowing them to "scoot by" without ever learning the art of mastering the pipeline.

    There is a new architecture arriving shortly, called VLIW, which is at its heart nothing but the RISC concept taken one step further, forcing the compiler to take its motley crew of instructions and set them up to pipeline efficiently. There are more differences than this, I know, but at its heart this is pretty much what it is. A good compiler will produce better code than any OOO engine out there can ever manage. Why? Because the programmer knows what he wants; the processor does not. No matter how smart, no matter how many transistors you throw at a problem, a smart programmer with a smart compiler will always turn it on its head.
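
    To make the pipeline-mastery point concrete, here's the classic transformation a scheduling compiler applies when it's allowed to reassociate (a sketch only):

      /* Naive: every add waits on the one before it, so an in-order
         pipeline stalls on the dependency chain. */
      float sum_naive(const float *a, int n)
      {
          float s = 0.0f;
          int i;
          for (i = 0; i < n; i++)
              s += a[i];
          return s;
      }

      /* Scheduled: four independent accumulator chains keep the pipeline
         full with no OOO hardware at all. (Assumes floating-point
         reassociation is acceptable.) */
      float sum_scheduled(const float *a, int n)
      {
          float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
          int i;
          for (i = 0; i + 3 < n; i += 4) {
              s0 += a[i];      s1 += a[i + 1];
              s2 += a[i + 2];  s3 += a[i + 3];
          }
          for (; i < n; i++)   /* tail */
              s0 += a[i];
          return (s0 + s1) + (s2 + s3);
      }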
  • This is one of the best non-technical articles I've ever seen on modern CPU architecture. The only real lack is a discussion of "retirement", or what happens after the execution units finish and the results have to be stuffed back into the programmer-visible state. This is where all the ugly cases get resolved - mispredicted branches, FPU exceptions, and similar dirty laundry. Much of the design effort is in this part of the system, and it deserved a mention. x86 emulation, in particular, places tough demands on the retirement unit. On an x86, you can store into the instruction you're about to execute. Think about what that means on a superscalar out-of-order execution machine.

    Regarding some other comments:

    • People seem to have forgotten the point of RISC'ing in the first place. RISC's main point was to give the final control of the system's pipeline speed from the hardware to the compiler. The problem with that idea was that software lives longer than hardware. If you have to get new versions of the software to effectively use a new CPU, people won't buy the new CPU. MIPS machines are notorious for needing many separate binaries for different CPU versions, which hurt MIPS competitively.
    • But average joe 6-pack won't buy either of them because they don't have the name Intel stamped on them. AMD has been doing very well with Joe Sixpack. They created the sub-$600 PC market with their cheap CPUs, and took the low end away from Intel.
  • The vector unit is state-of-the-art silicon that does way more, way faster than your typical SIMD logic. It crunches biiig numbers in vector math, and can be repurposed to do simpler things, like FP math. (There seems to be some confusion about whether it can do DP or only SP... either way, it -hauls-.)

    More to the point, the PowerPC post-RISC strategy is a lot smarter than Intel/AMD's. Mix'n'match logic to create the perfect balance of processor units, and slap four cores on a single slab of silicon. This is where PowerPC is going to be -at- next year.

    The problem has been Motorola lagging behind the group. IBM has signed on to crank out the next rev of the G4, and clock speeds are due for a major boost: they demoed 1 GHz tech on the PowerPC platform two years ago, and are cooking up 9 GHz technology in deep R&D as we speak. Even with the MHz gap, the design of the G4 is smart enough to keep up with the "big brutes" crunching x86, and the POWER4 (IBM's 64-bit PowerPC implementation for its RS/6000 Unix and AS/400 "baby mainframe" systems) will leave IA-64 in the dust before it even gets out of the gate.

    Smart money is on the original RISC R&D houses for the edge in the post-RISC sweepstakes: wait'll you get a load of what the Alpha has in store...

    SoupIsGood Food
  • It is true that the PPC has condition code registers, and that having more than one is better than just having a single CC register. But...

    The thing that one should note is that, from a CPU design point of view, condition code registers are often a Bad Thing(tm) - as are instruction "side effects" in general (condition codes being one example) - particularly with superscalar, out-of-order CPUs. They add complexity to the processor's attempts to re-order instructions on the fly. This is in part because the CC regs tend to create additional dependencies between instructions, and often the CC register(s) become a bottleneck. Also, when the CPU tries to reorder instructions, it has to worry not only about whether the reordering will clobber a needed result, but also about whether a moved instruction's side effects will do something unintended (i.e. the moved instruction changes the condition code, and so may not be able to be moved without upsetting the code semantics). These factors tend to constrain the CPU's out-of-order execution unnecessarily, leading to less optimal performance.

    Most processors try to avoid CC regs, since they are a centralized resource that can easily become a bottleneck when multiple compares come close together. For example, MIPS (as most RISC CPUs) doesn't use CC regs; if you want to check a condition, there is a compare instruction which will set the value of an integer register. Since said integer register can be any of the many integer regs, rather than a small set of particular registers, there generally aren't the bottlenecking problems like with condition codes. Condition codes are really a CISC-ish thing anyway, and in general RISC processors avoid them.

    Adding additional CC registers helps somewhat with loosening that bottleneck, but is really a bit of a kludge around a nasty instruction set artifact in PPC. Not exactly something you'd want to "retrofit" onto an x86 CPU (which has more than its share of nasty instruction set issues already).

  • by Anonymous Coward
    I think it's a shame that it's stuck pretending to be something it's not, and suffering performance as a consequence. Does AMD (or anyone else) have any plans for native software for this processor?

    I'm not saying the x86 architecture is all that bad, but I bet a version of something like Linux optimized to run natively on the Athlon would outperform an x86 version.

    Its x86 compatibility, when needed, is definitely a Good Thing(tm) at present - for emulating/running those lesser operating systems. Can your G4 run Windows like a 700 MHz Pentium? Well, a lot of you might not want to anyway, but many people have the need to run a Windows program once in a while... and that sells a lot more processors.

    You'd think that AMD could get a better hold of the 32-bit market by allowing their own technology to become a new standard. After a while, they could have chips (maybe marketed for servers at first) with no x86 emulation at all. It wouldn't be needed once their architecture established itself.

    Maybe I'm dreaming, but I'm just looking for something better on a small budget... and backwards compatibility is nice every once in a while. Is this even possible with the current Athlon?

    ~J
  • The gap will never disappear. Intel and AMD are in a price/speed war that will never have a victor.

    Eventually each company will own half the earth, and they will fight wars with huge robots a la MechWarrior. When that happens, sign me up for the cause.
  • If we're discussing the Win9x vs. MacOS "reliability" issue, in my experience Windows has actually been pretty stable (uptimes easily a few weeks, which is as long as I've gone without booting into whatever other OS I may have installed at the moment... typically NT or Linux). Well, I should say Win95 OSR2 or Win98 are pretty stable... the original Win95 was so damn buggy it was ridiculous. Whenever I've used MacOS (most recently, running 8.6), I've had lots of stability problems, to the point that I swear Macs just hate me :-p

    But then, there is the configuration factor you mentioned at work here too. My Win98 machine is pretty solid, with not too many modifications from standard setup. The Macs I use (for some of my music classes at school) may or may not be set up all that well, as I have no idea of the competence, or lack thereof, of the Mac admins.

    I personally wonder how much of the grief that gets blamed on Windows is due to Windows, and how much is due to poor-quality hardware. On machines with quality components, I've rarely had any sort of trouble with Windows. Things start getting shaky once you move to the really cheap hardware, though, in my experience. I do have to give M$ credit (*gasp*) for the sheer volume of hardware they support while maintaining a reasonable level of stability.

    Apple does have the advantage of a relatively small set of hardware to support. So I would hope they would be able to get a decent OS on it :-) OS X does seem like it might actually be pretty good, though I haven't had the chance to use it a whole lot.
  • Could AMD theoretically place a different frontend on the Athlon to make it a PPC chip?

    Presumably so. The decoder might actually be simpler than the x86 one. Doing so would likely require some modifications to the back end as well: even though the CPU works in internal ops, it has to commit results to memory only at whole-instruction boundaries - whole x86 instructions, or in our theoretical version, whole PPC instructions - never at the granularity of those ops. After all, what does having executed half an instruction mean from the program's point of view? Even then, the Athlon would still be hindered by having to do a PPC-to-internal-op translation. Plus, the choice of what to put in your core depends on what sort of code you expect to run. PPC code may have different characteristics than x86 code, in which case the core should be modified to run PPC more efficiently... you can see how this could end up becoming an entire redesign rather than just "slap-a-different-front-end-on-it". But if you don't care about efficiency quite so much, the modifications to run PPC might not be all *that* terrible.
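
    Here's a hedged sketch of that whole-instruction commit rule (toy micro-ops of my own invention - nothing like AMD's real internals): a CISC-style "add memory, register" instruction gets cracked into load/add/store internal ops, but retirement makes the results visible only once every op belonging to the instruction has finished, so the program can never observe half an instruction.

    #include <stdio.h>

    /* Toy internal ops a decoder might crack "add [mem], reg" into.
     * Purely illustrative. */
    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

    typedef struct {
        uop_kind kind;
        int      done;  /* executed, but not yet architecturally visible */
    } uop;

    /* Retire only when every op of the original instruction is done;
     * partial results must never become visible to the program. */
    static int can_retire(const uop *u, int n)
    {
        for (int i = 0; i < n; i++)
            if (!u[i].done)
                return 0;
        return 1;
    }

    int main(void)
    {
        uop add_mem_reg[3] = {
            { UOP_LOAD, 1 }, { UOP_ADD, 1 }, { UOP_STORE, 0 },
        };
        printf("retire? %s\n", can_retire(add_mem_reg, 3) ? "yes" : "no");
        add_mem_reg[2].done = 1;  /* the store finally completes */
        printf("retire? %s\n", can_retire(add_mem_reg, 3) ? "yes" : "no");
        return 0;
    }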

    And is this the type of idea the Crusoe processor is going to use - i.e. swapping out the frontend on a chip so it can run many architectures?

    Essentially. The difference with Crusoe (AFAIK - Transmeta hasn't officially said anything yet) is that it consists of a hardware part and a software part. The software part is responsible for decoding the instructions and telling the hardware what to do. The presumed benefit is that since the decoding is done in software, it can be programmed to handle any instruction set pretty easily, and you don't have to spend tons of transistors on complex decode logic like the Athlon does. So Crusoe should be less expensive to build.

    However, software is normally slower than hardware. Transmeta's bet is probably that the savings in the decode logic will allow them to make the chip faster, offsetting the performance penalty of doing software-decode. Also, not having all that decode hardware should decrease the power consumption of Crusoe as compared with the Athlon and similar chips. Transmeta has hinted that Crusoe will be aimed at the mobile computing market, so this makes some sense.
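
    As a rough illustration of the software-decode idea (a toy of my own - Transmeta hasn't published anything, so none of this resembles the real Crusoe): a tiny interpreter loop in C where the "software part" picks apart each instruction word and a simple switch stands in for the "hardware part". Supporting a second instruction set would mean swapping the decode logic at the top of the loop, not redesigning silicon.

    #include <stdint.h>
    #include <stdio.h>

    /* Made-up 32-bit format: top 8 bits = opcode, then three 8-bit
     * register fields (dest, src1, src2). */
    enum { OP_ADD = 0, OP_SUB = 1, OP_HALT = 2 };

    static uint32_t regs[256];

    static void run(const uint32_t *code)
    {
        for (;;) {
            uint32_t insn = *code++;          /* "software" decode... */
            uint8_t op = insn >> 24;
            uint8_t rd = (insn >> 16) & 0xff;
            uint8_t ra = (insn >> 8) & 0xff;
            uint8_t rb = insn & 0xff;

            switch (op) {                     /* ...driving simple "hardware" */
            case OP_ADD: regs[rd] = regs[ra] + regs[rb]; break;
            case OP_SUB: regs[rd] = regs[ra] - regs[rb]; break;
            case OP_HALT: return;
            }
        }
    }

    int main(void)
    {
        regs[1] = 40; regs[2] = 2;
        uint32_t prog[] = {
            (OP_ADD << 24) | (0 << 16) | (1 << 8) | 2,  /* r0 = r1 + r2 */
            ((uint32_t)OP_HALT << 24),
        };
        run(prog);
        printf("r0 = %u\n", regs[0]);  /* prints r0 = 42 */
        return 0;
    }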
  • >What they really need is better marketing. You're right, where has Apple been? I have seen nary an ad or billboard for Apple Computer in years! Boy, they really dropped the ball, marketing-wise.
  • Once its vendor decided to put the boot in, OS/2 was doomed - which is a shame, because it's the best x86 OS.
  • From the rumors I've heard, Intel engineers would sell their grandmothers to be using only 30 watts. I'll look for the hard numbers, but I've heard _70_ when they run the things at full speed.

    And a G3/400 laptop--the whole thing--only has a 45W power adapter. That'll run the hard drive, the DVD-ROM drive, the processor, the screen, the screen backlighting, the speakers, any external bus-powered USB devices...and charge the battery. All at once.

    Oh, and it'll also let you slow the processor down to extend battery life; you know, like that amazing new feature Intel's announced they'll have Real Soon Now?

    Admittedly, that's the G3 instead of the G4, but even if the G4 itself is using twice as much power as the G3, you'd need what, a 60W adapter?

  • If x86 is fine, why is it that Motorola and IBM can pretty much keep par with Intel and AMD despite vastly smaller R&D budgets?

    IIRC, some people were talking about why the G4 is 450 MHz (stable), whereas you can buy 1 GHz (stable, Kryotech) K7s. That's more than 2x the MHz rating. You can also buy 700 MHz K7s without the Kryotech stuff, which is 1.5x the MHz rating.
    On par? No....

    The designs of the G4 and the K7 are completely different. The K7 is like the Saturn V, whereas the G4 is like the Concorde. They use different fuels and travel at different speeds, but both are fast enough to get me from point A to point B faster than I can appreciate. So the comment that "Rather than a 700MHz x86 processor, you'd probably have a 1GHz or higher RISC processor that would make the current K7 and G4 look like snails" makes no sense, as to me the G4 (at its paltry 450 MHz) looks really damned fast. I can't even conceive of how fast 1 GHz would be. (Besides, how do you know that removing the x86 translator unit would speed anything up? Where's your EE PhD?)

    Here are my points:
    1) x86 works fine. I have plenty of working knowledge of how to program in asm for this instruction set, and plenty of proven, working software for it (think Linux). The "flaws" you talk about are the same ones the RISC community rolled out when Intel had 200 MHz PPros out for more than 6 months before releasing a new CPU design. ISA zealotry annoys me, and it doesn't help your possibly legitimate case at all.

    2) The x86 is currently really a lot faster, even if it's still too fast for me to notice (except in my RC5 rate). So why strive to go even faster, faster, faster, when things are already fast and getting faster (Moore's law)?

    3) An all-new RISC design (like the internal implementations in the K7, K6-*, PPro, PII, PIII, and Celeron cores) would not have any software support coming out the door. The reason they have these micro-implementations is that they add a layer of abstraction to the chips and make them perform. They change the internals every generation, using different micro RISC cores. If they exposed one of those cores as the native ISA of their flagship chip, they'd be stuck with it and lose the flexibility the cores give them in the first place. x86 is a nice general instruction set with instructions for whatever you need, which lets them implement it any way they want (think Transmeta).

    4) Have you noticed how the Sparc32, Sparc64, m68k, and a few other branches of the Linux kernel are not really well supported? It would take time for even Linux to come to bear on a new architecture. I'd rather have 1 GHz Linux now than a "possible" 1 GHz Linux on some new architecture.

    5) AMD, Intel, et al. have an investment of years in the x86 chip business. It's what makes them their tons and tons of money. Why would they throw away the backwards compatibility that earns them oodles of dollars, just to become another bit player in the RISC business (which isn't worth nearly as much)?

    Anyway, I'm ranting a bit because you're acting just a bit like a zealot. I praise you for being able to look ahead, but you seem to have a bit of a problem looking at the present. AMD wants x86 dollars, and they're getting them.
    ---
  • Do not confuse effectiveness and efficiency.

  • My point was that there's a very small market for a low-end PC that can't run 'legacy' or old OSes. If you wanted such a thing, you might as well just buy an Alpha.
    --
  • Something about as technical as the K6/7400 would be perfect. Thanks!
  • I'd also worry about die size. Power dissipation is really only an issue in the portable market, which isn't that big (right now).

    But die size will affect yield and price. I don't know where MOT and INTC stand relative to each other re: process efficiency, but as they adopt smaller geometries and funky processes, die size might become a gating factor in managing yield.
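
    For the curious, here's a hedged sketch of why die size hits yield so hard - the classic first-order Poisson defect model, yield = e^(-area x defect density). The defect-density number below is my own assumption for illustration, not anything from this thread.

    #include <math.h>
    #include <stdio.h>

    /* Poisson yield model: the probability a die has zero fatal
     * defects, given die area (cm^2) and defect density (per cm^2). */
    static double yield(double area_cm2, double defects_per_cm2)
    {
        return exp(-area_cm2 * defects_per_cm2);
    }

    int main(void)
    {
        double d0 = 0.5;  /* assumed defects per cm^2 */
        /* Halving the die area improves yield more than linearly: */
        printf("2.0 cm^2 die: %.0f%% yield\n", 100 * yield(2.0, d0)); /* ~37% */
        printf("1.0 cm^2 die: %.0f%% yield\n", 100 * yield(1.0, d0)); /* ~61% */
        return 0;
    }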

    As the margins on the rest of the computer get driven into commoditized oblivion, the price of the CPU will rise in relative importance.

    (of course scale matters, too, but MOT should be able to keep the volumes up w/the embedded market)
  • Unfortunately, Itanium doesn't seem to be improving on the power-consumption issue. According to an article on MacOS Rumors [macosrumors.com], with a link to an article at The Register [theregister.co.uk], Itanium has been stuck at slow speeds (400 MHz tops so far) and has been consuming massive amounts of power (30 watts at 350 MHz). I'm sure they'll improve on that before release, but it doesn't look very promising...

    -Joe

  • Game, set, & match to lamz for the above post. As this thread has since degenerated into a Mac vs. MS free-for-all (I love 'em), I should state that I grew up using GUIs with Windows, and yes, I run IRIX and OS 9 networked at home, and know enough UNIX commands to get myself into real trouble.

    In response to the post that says certain varieties of Windows are more stable than the Mac OS: my experience has been the same, but with the cardinal Windows caveat that produces such stability - no one may touch the PC, no one may install software on the PC, no shareware is worth the trouble of repairing the PC, and no new hardware other than USB may be added to the PC. That is clearly not a workable arrangement in the ever-changing production environment where I work with hundreds of image files (thank you, G4 chip), capturing audio, analog and DV clips, and stills, and cannot afford more downtime than it takes to restart a cranky Mac. I've seen many PCs go down and not come back up in a limited-budget production where a week or more was lost to the PC. From my experience I would never go into a professional production using Windows 9x or NT, just as I'd never go into production using a Mac without ample backup.

    Our org has at least 1,000 users, most of whom use PCs, as no choice is offered. At our last tech working group the IT head complained that Macs were requiring some outsourced support costs (as no IT staff were hired with Mac competency). I pointed out that her Y2K solution of replacing every legacy PC in the org was the real source of our budgetary waste, dwarfing the costs of Mac maintenance by several orders of magnitude. A motion to continue Mac support subsequently passed without dissent.

    I've got a million of 'em (such stories), and would close by saying that to declare which OS or chip is "superior" without first defining the work and workload for such machines is pointless.

    P.S. I think the Athlon is the first ray of hope I've seen in the PC world in years.
  • I think you misunderstood what I said. Compaq didn't pull the plug on Alpha, only on Alpha/NT.

    Compaq provided the funds and engineers to port MS products to Alpha/NT.

    All MS had to do to support Alpha/NT was continue to ship the binaries on the CDs. It was Digital/Compaq that was doing all the work. Don't forget: every NT CD that went out the door already had the Alpha binaries on it. That includes all the BackOffice stuff, IE, and the service packs.
  • by HeghmoH ( 13204 ) on Tuesday January 04, 2000 @02:38PM (#1410296) Homepage Journal
    I make no claim that removing the x86 unit from the K7 would make it faster. However, the money that was spent on that unit could have been channeled into something else - say, making the chip faster. I have no problem with AMD making x86 processors; they can do whatever the heck they want. I do have a problem with people who think that, because of the gains Intel and AMD have wrested from the instruction set, x86 has no problems.

    Here's an example: in IA32, an instruction can start anywhere - addresses divisible by 4, addresses divisible by 2, odd addresses - and instructions come in many varying lengths. This causes massive problems for pipelining and instruction decoding. If you're trying to decode three instructions at once, how do you know where instruction #2 is until you've finished with instruction #1? After all, they can be many different lengths. Now, clearly, this is not an insurmountable problem, as Intel and AMD have both pulled it off quite well. However, it comes at added expense. That money could be translated into lower-cost chips, or more features for more speed, were it not for the necessities imposed by the aging instruction set. The PowerPC ISA (I talk of the PPC because it's what I know best; I believe others are like this as well) has instructions that start on addresses divisible by four. They are four bytes long. Period. Thus, it's very easy to see where instructions #1, #2, and #3 are, because each one is the same length. Barring a branch, it's simple to start decoding multiple instructions at once.
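
    A quick sketch of that decode problem in C (toy encodings of my own - not real x86 or PPC formats): with variable-length instructions, you can't find instruction #2 until you've decoded the length of instruction #1, so the scan is inherently serial. With a fixed 4-byte format, the Nth instruction is simply at offset 4*N, so a wide decoder can grab several at once.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy variable-length format: the low 2 bits of the first byte
     * give the instruction length (1-4 bytes). Each start offset
     * depends on decoding the instruction before it - a serial chain. */
    static size_t starts_variable(const uint8_t *code, size_t len,
                                  size_t *starts, size_t max)
    {
        size_t n = 0, off = 0;
        while (off < len && n < max) {
            starts[n++] = off;
            off += (code[off] & 0x3) + 1;  /* must decode before advancing */
        }
        return n;
    }

    /* Fixed 4-byte format (PPC-style): all starts known up front. */
    static size_t starts_fixed(size_t len, size_t *starts, size_t max)
    {
        size_t n = 0;
        for (size_t off = 0; off + 4 <= len && n < max; off += 4)
            starts[n++] = off;             /* no decoding needed at all */
        return n;
    }

    int main(void)
    {
        uint8_t code[] = { 0x02, 0xaa, 0xbb, 0x00, 0x01, 0xcc };
        size_t s[8];
        size_t n = starts_variable(code, sizeof code, s, 8);
        for (size_t i = 0; i < n; i++)
            printf("insn %zu starts at offset %zu\n", i, s[i]);
        n = starts_fixed(sizeof code, s, 8);
        printf("fixed format: %zu whole instructions\n", n);
        return 0;
    }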

    Now, you sound like you don't care too much about speed. I don't blame you. I'm typing this on a 300 MHz computer that I got, new, not more than two months ago. It's plenty speedy for me. Cost, however, is another thing. Rather than putting those saved dollars toward more features for more speed, they could simply pass the savings along to the consumer. Also, fewer transistors spent supporting cruddy legacy "features" means less power consumption, which means a smaller electricity bill - particularly significant if your computer is on all the time. Maybe it's not that big a deal for you, but it's something to consider.

    I agree that IA32 works fairly well. Cars with carburetors worked fairly well too, but now almost everything has fuel injection. Propellers on airplanes work pretty nicely in a lot of cases as well, but whenever a big job needs doing, jets have replaced the propeller. Even in a lot of smaller prop-driven airplanes, a turbine drives the prop instead of a piston engine. I could name more, but I think the point is made. IA32 works, but that's no reason not to wish for something better. I realize it's unrealistic, but I hold out hope that these companies that make so much money off this market will decide to use their massive resources to make something truly new and good, rather than just sucking up profits from more of the same.

    Btw, about the Concorde: it still takes three hours in an unreasonably small cabin to cross the Atlantic. Or so I hear. :)
