Despite Aging Design, x86 Still in Charge 475
An anonymous reader writes "The x86 chip architecture is still kicking, almost 30 years after it was first introduced. A News.com article looks into the reasons why we're not likely to see it phased out any time soon, and the history of a well-known instruction set architecture. 'Every time [there is a dramatic new requirement or change in the marketplace], whether it's the invention of the browser or low-cost network computers that were supposed to make PCs go away, the engineers behind x86 find a way to make it adapt to the situation. Is that a problem? Critics say x86 is saddled with the burden of supporting outdated features and software, and that improvements in energy efficiency and software development have been sacrificed to its legacy. And a comedian would say it all depends on what you think about disco.'"
Let me guess... (Score:5, Insightful)
I'm going to go with:
Did I miss anything?
The X86 is a pig. (Score:3, Insightful)
The problem is that it is a bloody fast and cheap pig that runs a ton of software and has billions or trillions of dollars invested in keeping it useful. I am afraid we are stuck with it. At least the X86-64 is a little better.
lock in (Score:5, Insightful)
Simple! (Score:5, Insightful)
And just like the four-stroke engine, modern engines still just burn gasoline and push the car forward. That is where the similarity with the original engines ends.
If it ain't broke, don't fix it (Score:4, Insightful)
You don't buy a new car just because the tires need replacing (well, some people do, but that is rarely the fiscally responsible thing).
If it ain't broke, it doesn't need fixing.
Anything 10 times better? (Score:4, Insightful)
Paul
Re:Simple! (Score:5, Insightful)
An engine is a black box - petrol in, kinetic energy out (put simply) - whereas the architecture of a processor is not.
AMD and Intel can make as many additions to x86 as they like, but if they stop supporting the existing instruction set, they'll sell nothing.
I'm sure Linux would be compiled on to a new architecture overnight, but I doubt MS would move any time soon - and their opinion holds a lot of weight on the desktop.
RISC ftw!
This says it all for me: (Score:4, Insightful)
Even new software might (and often does) use the so-called old instructions. If you want to completely redesign the hardware you would also have to completely rewrite the software from scratch as you would not be able to rely on previously written code and libraries. This is simply not feasible on a global scale...
Re:Does it matter? (Score:4, Insightful)
Weeellll there's also: (Score:5, Insightful)
5. Security. Will my x86 progs be supported in 20 years? The answer: yes.
6. Availability. Hmm... Intel, I'd like to buy 1,000,000 CPUs. Intel: Sure thing.
7. Good will. What should we buy, Intel or PPC? PPC? What's that? Go Intel! Yes, boss. (Just look how far Itanium got on Intel's name alone.)
Re:The X86 is a pig. (Score:5, Insightful)
Because there is such a massive amount of installed x86 software base that you'd be throwing away silicon. To be sure that software ran on the most systems possible, software would still be written for x86 and not the 'desired' architecture.
That being said, OSS tends to make good inroads in that you get all the source, so you can recompile to whatever architecture you want. However, since x86 still has the huge marketshare, other architectures get less attention. Also, all of the JIT languages (Java, C#, etc.) make transitioning easier IF you can get the frameworks ported to a stable environment on the 'desired' architecture.
The main problem is that there is *so* much legacy code in binary (EXE) format only (the source code for many of those has been literally lost) that can be directly tracked to money. There are systems that companies continue to use and have so much momentum that changing platforms would require extreme amounts of money to reverse engineer the current system - complete with quirks and oddities, rewrite, and (here is a big part that many people fail to add in) retest and revalidate, that many companies don't want to spend that kind of money to replace something that 'works'.
There's so much work/time/effort invested in x86 now that it's hard to jump off that train. AMD's x86-64 is a good approach in that you can run all the old stuff and develop on the new at the same time with few performance penalties. However, I don't know if we'll ever be able to shrug off the burden of x86.... at least not for a long time to come. It'd take something truly disruptive to divert from it (and what people are currently envisioning as quantum computing is not that disruption).
Re:This says it all for me: (Score:2, Insightful)
That isn't entirely true. Sure, code might exist in the wild which uses old instructions, but it wouldn't need to be rewritten - just recompiled with a suitable compiler. (Ignoring people who hand-roll assembly, of course!) (And whether the source still exists is an entirely separate issue!)
However, with all the microcode on board chips these days, it should be possible to emulate older instructions; provided Intel can persuade compiler writers to deprecate certain opcodes, the situation should essentially resolve itself in a few years.
Being mostly compatible doesn't pay (Score:4, Insightful)
Something they all had in common, though, is that they sold better than IBM's mostly-compatible PCjr. I attribute that difference to software and compatibility problems. Because of BIOS differences, a number of programs written for the PC couldn't run on the PCjr. That led to a fragmentation of shelf space at software retailers and confusion among retail customers, and led to customers avoiding the platform in favor of easier-to-understand options.
I would expect something similar to happen if Intel, AMD, or anyone else started making mostly-compatible x86 processors. It wouldn't sell unless all of the software people are used to running still worked. Sure, someone could take Transmeta's approach and emulate little-used functionality in firmware rather than continuing to implement everything in silicon, but it all pretty much needs to keep working, so why bother?
Seriously, why would anyone undertake the effort and expense needed to slim-down x86 processors when the potential gains are small and the market risk is pretty huge? No chip manufacturer wants to replace the math-challenged Pentium as the most recent mass-market processor to demonstrably not work right.
Pundits and nerds can talk all they want about why the x86 architecture should be put out to pasture, but it won't happen until a successor is available that can run Windows, OS X, and virtually all current software titles at acceptable speeds. And that seems pretty unlikely to happen on anything other than yet another generation of x86 chips.
Re:Let me guess... (Score:3, Insightful)
There is something I hope for:
Vista tanks mightily, and OS X and its successors become the dominant OS within 10 years. OS X is instruction-set agnostic, has been proven able to run on multiple platforms with relatively little effort, and currently runs on three different instruction sets: x86, POWER, and ARM (on the iPhone). Linux runs everywhere too. As soon as that happens, there is no reason not to drop the most expensive-to-develop-for and least efficient architecture.
But as long as people still use MS operating systems, we will be stuck with x86 and have to pay the price.
Re:I remember the good old days (Score:1, Insightful)
Proprietary software locks us in (Score:4, Insightful)
A couple of years ago, I was shifting some stuff around and I needed to clean off my main desktop machine, an x86 box. I installed the same linux distro on a G4 mac and just copied my home directory over. Everything was exactly the same -- my browser bookmarks and stored passwords, my email, my office docs, etc.
A lot of people take Apple's jump from PowerPC to x86 as a sign that x86 is unstoppable. But I'd argue that the comparative ease with which the migration took place shows how weak processor lock in is becoming. The shift from PPC to x86 was nothing compared to the jump from MacOS Classic to OS X.
The real reason x86 won't go away any time soon is that MS has decided that's the only thing it's going to support, and MS powers most of the computers in the world. Windows is closed, so MS's decision on this is final, and impossible to appeal.
Versatility vs. Lack of Vision (Score:3, Insightful)
The problem, though, is that when you introduce many smaller features, you cannot always anticipate how these features will interact with one another. This is why it is counterintuitive to many people that "new and improved" is not always so, and that you actually risk introducing bugs into the design more subtle than you can detect. That, combined with the continuing support for legacy code, means that complexity (and power consumption) goes through the roof with each iteration. While it is a testament to the robustness and versatility of the x86 architecture that it has survived thus far, one could argue that the architecture *had* to survive because we couldn't come up with the next paradigm shift.
The good news is that there are solutions to this situation. The bad news is that all of the solutions involve massive change in the way the software industry clings to the tried-and-true, or truly revolutionary innovation in chip re-architecture, or billions of dollars, etc. As the article points out, experience with EPIC has demonstrated how NOT to introduce a completely new architecture. There is no easy way out, but there are several possible paths.
Re:Simple! (Score:5, Insightful)
I think that's the point, actually.
If we were going to start over and design the best way to extract usable power from gasoline from the ground up, we could probably do better than the 4-stroke, just like we could do better than the x86 ISA, and just like we could do better than LCDs for flat panel displays.
The problem is that, if you take an intrinsically inferior technology, and spend years upon years optimizing it, it will have such a head start that it is almost impossible for a newer, 'better', technology to compete.
We lose the X86 when... (Score:3, Insightful)
BSD on garbage Dell, Linux on spare parts white box (Score:1, Insightful)
So I took a walk this evening, actually while visiting family. We cruise on past this house in a nearby neighborhood and then stop, because I've backtracked. Next to the trash can is a Dell computer. I think, how bad can it be? And take it home. Older processor, dead hard drive, monitor with a bad cap causing intermittent screwups. The only part not fixed is the monitor, but I dropped in an old IDE drive and it's a perfect FreeBSD machine. Heck, if I had the bandwidth at home, I'd serve my website [chrisblanc.org] off of it.
The Linux box came from three or four older computers, two of which belonged to me, combined in the least-junky case I could find. I'm still not certain of which distro will be "final" on it, but I'm trying Ubuntu now. This machine gets re-imaged at least once a week, because it's the "beater" box for experiments.
I've also got a Windows XP machine that I love dearly. It's an Intel board, a 2.4GHz P4, some other stuff I forgot. I put it together for $600 and it's more stable than the Windows machines my neighbors bought from Dell. I have no plans to upgrade to Vista for another two or three years, in largest part because I don't want to buy the hardware.
Life rewards intelligence if you're willing to apply it. Is this hacking, best practices, or common sense? I also get free jalapeno peppers from a very small garden, if I remember to water it. No "corporate chilis" for me. Linus would be proud.
Re:English is 700 years old (Score:2, Insightful)
I'll tell you exactly what's wrong with Vista. If you, like many average schmoes around the world, are in the market for a new PC, you're stuck with Vista. Nothing necessarily wrong with that, except that the schmoe will, as usual, get a $500 Dell or a $300 eMachines. Why the hell is he going to spend a grand or more on a PC? Of course, these are pitiful little Duron/Celeron boxes with way too little RAM, lots of bloatware for extra sluggishness, and of course lowest-bidder parts.
So he'll take that PC home, fire it up, and be pretty much instantly pissed. Not only is it slow and sluggish as hell, but this time he has to contend with a lot of new features that he has no clue or experience about. Depending on his patience, he'll plug away for a day or a few, but eventually he'll call me, or someone like me.
Now, this is the important part: He's used to XP. He's used to an OS that, while sucky, worked well enough for him and was relatively speedy, so why can't he just have that? Why does he have to have something replaced that worked just to put up with this shit?
So I will perform a downgrade, and I'll happily use a pirated copy to do it, too. As far as I'm concerned, he paid for the OS already, I couldn't give a crap about specific licenses for specific machines. This guy just wants to get on with his life, and that is the service I provide.
Did my first downgrade a couple of days ago, and I expect to be doing several more this year.
Now I know you'll all be yelling about getting sufficient RAM for his machine, going in and cutting some of the bloat instead of resorting to piracy for the backwards step, but if you're going to say all that, it's obvious you don't do a lot of house calls.
Oh, and before I get modded into oblivion by the MS fanboys, look into your hearts. You know I'm right.
Re:Does it matter? (Score:2, Insightful)
The real problem with the Itanium is that it came out a few years too late and the x86 emulation hardware was designed to be on par with the chips that were going to come out at the scheduled release time.
Re:The X86 is a pig. (Score:4, Insightful)
The bottom line is: has any other architecture enabled apps to run significantly faster over multiple CPU generations at comparable cost? Nope. Other architecture fads have come and gone, but the x86 just absorbs the best ideas from each and keeps marching along.
Welcome to the late 90s: ISA doesn't matter (much) (Score:4, Insightful)
The instruction set of a processor architecture with so many resources available to it doesn't really matter, so long as it isn't utterly and completely braindead. x86 isn't braindead enough to qualify... if you had an INTERCAL [catb.org] instruction set or a One Instruction Set Computer [wikipedia.org] it might.
You really want to do several things to get performance out of an instruction stream -- register renaming, instruction manipulation (breaking them apart or joining them together or changing them into other instructions), elimination of some bad instruction choices, and a host of other things. You would want to do these things even on a "clean" ISA like Alpha or PPC or MIPS. And if you are doing them, the x86 instruction set suddenly becomes much less of a problem. There are even advantages: the code size on x86 tends to be better than a 32-bits-per-instruction architecture.
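To make the renaming idea concrete, here is a toy Python sketch (a deliberate simplification, not any real CPU's renaming logic; the register names are just labels): each write to an architectural register is given a fresh physical register, so reusing a register *name* no longer serializes otherwise independent work.

```python
def rename(instructions, num_physical=16):
    """Toy register renamer: (dest, [srcs]) pairs over architectural
    register names become (phys_dest, [phys_srcs]) pairs.  Every write
    allocates a fresh physical register, removing false dependencies."""
    mapping = {}      # architectural reg -> current physical reg
    next_phys = 0
    renamed = []
    for dest, srcs in instructions:
        # Reads go through the current mapping (the last write wins).
        phys_srcs = [mapping[s] for s in srcs]
        # Writes get a brand-new physical register.
        phys_dest = next_phys % num_physical
        next_phys += 1
        mapping[dest] = phys_dest
        renamed.append((phys_dest, phys_srcs))
    return renamed

# eax is written twice; after renaming, the two uses are independent
# and the second pair of instructions can issue alongside the first.
prog = [
    ("eax", []),        # eax = ...
    ("ebx", ["eax"]),   # ebx = f(eax)
    ("eax", []),        # eax = ...   (reuses the name, not the value)
    ("ecx", ["eax"]),   # ecx = g(eax)
]
print(rename(prog))   # [(0, []), (1, [0]), (2, []), (3, [2])]
```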
Instruction sets are languages with exact meanings, which means that you can precisely translate from one instruction set to another. And, as it turns out, you can do it fairly easily and efficiently. Which is why Transmeta did pretty well. Which is why Apple's Rosetta and Java JIT compilers work (and DEC's FX!32 on Alpha before that). Which is why AMD and Intel are right there at the top of the performance curve with x86-style instruction sets, because it JUST DOESN'T MATTER THAT MUCH.
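A toy sketch of why that translation is so mechanical (the opcode names and micro-op format here are invented for illustration, not any real decoder's output): one compound "CISC" instruction expands into a fixed sequence of simple load/compute/store micro-ops.

```python
def translate(insn):
    """Expand a toy CISC instruction into toy RISC-like micro-ops.
    This is the same table-driven expansion an x86 decoder or a
    binary translator performs, just radically simplified."""
    op, dst, src = insn
    if op == "add_mem_reg":   # add [dst], src  ->  load / add / store
        return [("load",  "tmp", dst),
                ("add",   "tmp", src),
                ("store", dst,   "tmp")]
    if op == "mov_reg_reg":   # simple instructions map one-to-one
        return [("mov", dst, src)]
    raise ValueError(f"unknown opcode: {op}")

uops = translate(("add_mem_reg", "0x1000", "eax"))
print(uops)
# [('load', 'tmp', '0x1000'), ('add', 'tmp', 'eax'), ('store', '0x1000', 'tmp')]
```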
Why didn't Transmeta kick more butt? Because they didn't have the economies of scale that AMD and Intel have. Because they didn't have the design resources that AMD and Intel have. Because AMD and Intel had better-tuned, faster processes than TSMC or whoever was fabbing Transmeta's chips. THOSE are the most important things, not the instruction set that you have on disk.
Now a good ISA can help in many ways: SIMD instructions really help to point out data-level parallelism. More registers help a wee bit to prevent unnecessary work around the stack for correctness. You can get rid of a bit of logic if you can execute without translation. But these things can either be added to x86 (SSE/x86-64) or aren't expensive enough to be worth it on a 100 sq mm, >50W processor. Maybe in an embedded, low-power processor.
Re:The X86 is a pig. (Score:3, Insightful)
It's efficient because you hardly ever need to use a size prefix in normal code. In 16-bit mode, the default is 16-bit operations. In 32-bit mode, the default is 32-bit operations. Prefixes are for unusual cases where you're using the "wrong" size for your current mode.
Note that a lot of RISC architectures would require multiple 32-bit instructions to do the job that a single x86 prefix byte does because they don't natively support 16-bit operations at all.
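The actual IA-32 encodings bear this out (hand-assembled; the byte values below can be checked against Intel's manuals): in 32-bit mode the 16-bit form of a move costs a single 0x66 operand-size prefix byte, and the 16-bit immediate is actually shorter overall.

```python
# Real IA-32 encodings of "load the constant 1 into a register",
# assembled by hand, 32-bit mode.  B8 is "mov eax, imm32"; the 0x66
# operand-size prefix flips it to "mov ax, imm16".
encodings = {
    "mov eax, 1 (native 32-bit)":    bytes([0xB8, 0x01, 0x00, 0x00, 0x00]),
    "mov ax, 1  (0x66 size prefix)": bytes([0x66, 0xB8, 0x01, 0x00]),
}
for name, code in encodings.items():
    print(f"{name}: {len(code)} bytes -> {code.hex(' ')}")
```

On a fixed-width 32-bit RISC, the "wrong"-size operation typically costs a whole extra 4-byte masking instruction instead of one prefix byte.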
Idiotic... (Score:3, Insightful)
The architectural limitations of x86 were probably true up through the Pentium 1 days. After the introduction of Intel's P6 and AMD's K6, everything changed. At that point, x86 was no longer the clumsy CISC snail it used to be. From then on, the fierce competition between Intel and AMD has pushed x86 ahead of every other architecture. Others like Alpha held on to the pure performance crown for a few years to come, but they did so by embracing much higher power consumption. These days, new x86 CPUs are falling in power consumption, not rising. And AMD's Geode CPUs can give you a good-performing x86 CPU for embedded systems, OLPC, and anything else, in under 1W. There's really nothing else that is lower power which still performs as well...
These days, x86 is more than competitive with everything else in sheer performance, performance-per-watt figures, and far ahead in performance per dollar. One at a time, nearly all the limitations of the x86 architecture, that were so often paraded out by competitors, have been worked around. It's most other architectures which were crippled, in that their short-sighted design was only really good in one area, and they only became popular because x86 wasn't quite there at the time. Meanwhile, x86 continued to develop, addressing those shortcomings, and the others did not. The only competitors these days are Power and SPARC, and the two highest-profile companies using them have long since come around, and started selling x86 themselves.
Backwards compatibility is only the smallest of reasons that x86 is still around. How many Linux/BSD users continue to buy x86 systems, even though they would hardly notice an underlying architecture change? How many super-computing clusters are x86-based? It's only the Windows world that needs x86 compatibility, and though that's about 90% of the market, the other 10% use x86 anyhow.
Re:English is 700 years old (Score:3, Insightful)
Why does he have to have something replaced that worked just to put up with this shit?
Shhh! If everybody sold good stuff with decent specs and security enabled, you'd be out of business and serving me my lunch. (joking, of course).
Oh, and before I get modded into oblivion by the MS fanboys, look into your hearts. You know I'm right.
Who are you, Darth Vader? Search your feelings, Bill...
Give it more time (Score:5, Insightful)
If instead of giving up after a day, he had tried it for a week or a month, he would have found out how great everything is. Then in a few months he would be used to it and if you try to make him downgrade to XP he will cry.
There are many great features in Vista, but you have to try it for yourself.
Re:English is 700 years old (Score:5, Insightful)
For gods sakes, express a point of view and STOP FUCKING WHINING ABOUT MODERATION.
Seriously. Even if you ARE modded down, it doesn't make you some kind of martyr.
Re:The X86 is a pig. (Score:5, Insightful)
Any processor has to do the exact same work, whether the user-visible encoding is done this way or as an "SP indexed" addressing mode. At the micro-op level, it all gets renamed, reordered, etc. so that the same things are happening. Moreover, that particular sequence is so common that, in all probability, most x86 CPUs have special logic just to execute that entire sequence faster than the naive RISC equivalent.
Marketing service and support (Score:3, Insightful)
Sometimes it's support and marketing that make all the difference. Way back when, IBM introduced a new computer called System/360. It was crude compared to a lot of its competition, but they knew how to sell them, and they supported them well. IBM went on to rule the mainframe world. Their competition are now footnotes in history books.
One of IBM's competitors gave us the phrase "Sullen but unrebellious" to describe how much money must be spent looking after customers.
I play with Linux on UltraSPARC (Sun Ultra 5) and StrongARM (gumstix) but am typing this on an x86 Slackware box. Does this mean I too have sold out? :-)
...laura
Re:Proprietary software locks us in (Score:1, Insightful)
MS doesn't have a pony in this race. They just try to support whatever their customers are using. When the RISC market dried up, MS stopped supporting those non-x86 architectures because their customers weren't using them.
dom
Re:Does it matter? (Score:3, Insightful)
My experiences with Alphas were universally bad. The Unix they ran was a flaky bitch, and in any given cluster you were guaranteed to have a few of the machines go down during a long computation. Then again, they were expensive.
They were quite zippy though.
legacy free x86 chip? (Score:3, Insightful)
We don't need to support 30 years of backwards compatibility!
Why x86 is better than one might expect. (Score:5, Insightful)
The x86 instruction set is a surprisingly good way to build a computer. The reasons aren't obvious.
First, the original x86 was a huge pain, with that stupid segmented memory arrangement. But IA-32 was better and cleaner; at last there was a flat 32-bit address space. (Yes, there's a segmented 48-bit mode, and Linux even supports it, but at least apps see a flat address space.) AMD-64 is even more regular; the segmented memory stuff is completely gone in 64 bit mode. So there is progress.
RISC architectures could yield simple machines that could execute one simple fixed-width instruction per clock cycle. The early DEC Alphas, the MIPS machines, and early IBM Power chips are examples of straightforward RISC machines. This looked like a big win. The ALU was simple, design teams were small (one midrange MIPS CPU was designed by about six people), and debugging wasn't hard. RISC looked like the future around 1990.
What really changed everything was advanced superscalar architecture. The Pentium Pro, which could execute significantly more than one instruction per clock, changed everything. The complexity was appallingly high, far beyond that of supercomputers. The design teams required were huge; Intel peaked somewhere around 3000 people on that project. But it worked. All the clever stuff, like the "retirement unit" actually worked. Even the horrible cases, like code that stored into instructions just ahead of execution, worked. It was possible to beat the RISC machines without changing the software.
The Pentium Pro was a bit ahead of the available fab technology. It required a multi-chip module, and was expensive to make. But soon fab caught up with architecture, and the result was the Pentium II and III, which delivered this technology to the masses. Then AMD figured out how to do superscalar x86, too, using different approaches than Intel had taken.
The RISC CPUs went superscalar too. But they lost simplicity when they did. One of the big RISC ideas was to have many, many programmer-visible registers and do as much as possible register-to-register. But superscalar technology used register renaming, where the CPU has more internal registers than the programmer sees. The effect is that references to locations near the top of the stack are as efficient as register references. Once the CPU has that capability, all those programmer-visible registers don't help performance.
Making all the instructions the same size, as in most RISC machines, leads to code bloat. Look at RISC code in hex, and you'll see that the middle third of most instructions is zero. Not only does this eat up RAM, it eats up memory and cache bandwidth, which is today's scarce resource. Fixed size instructions simplify instruction decode, but that doesn't really affect performance all that much. So x86, which is a rather compact code representation, actually turns out to be useful.
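A back-of-the-envelope illustration of the density point (the x86 byte counts below are real IA-32 encodings; the RISC column simply assumes a fixed 4-byte instruction word, as on Alpha, MIPS, or PPC):

```python
# Encoded sizes of a tiny 32-bit x86 instruction mix, in bytes.
x86_bytes = {
    "inc eax":       1,   # 40
    "push ebp":      1,   # 55
    "mov eax, ebx":  2,   # 89 D8
    "add eax, 1000": 5,   # 05 E8 03 00 00
    "ret":           1,   # C3
}
x86_total = sum(x86_bytes.values())
risc_total = 4 * len(x86_bytes)   # fixed 4-byte instruction word
print(f"x86: {x86_total} bytes, fixed-width RISC: {risc_total} bytes")
# x86: 10 bytes, fixed-width RISC: 20 bytes
```

Half the bytes for the same work means half the instruction-fetch bandwidth and twice the effective instruction-cache capacity, which is exactly the scarce resource the comment identifies.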
Stop whining about whining (Score:2, Insightful)
Re:lock in (Score:3, Insightful)
Of course, this is much better if you can touch type, but even if you can't, this still seems preferable to your current situation.
give an interface choice (Score:1, Insightful)
Sure, you can throw them in the deep end and hope they swim, but given the odds that they might drown and become an anti-MS advocate for the rest of their lives, it's a big gamble when you can just ease them in right from the start.
I'm not saying there isn't transition help out there (tutorials, online and built-in, and books), but average users don't want to go through all that (even though they should).
-Tony