Despite Aging Design, x86 Still in Charge
An anonymous reader writes "The x86 chip architecture is still kicking, almost 30 years after it was first introduced. A News.com article looks into the reasons why we're not likely to see it phased out any time soon, and the history of a well-known instruction set architecture. 'Every time [there is a dramatic new requirement or change in the marketplace], whether it's the invention of the browser or low-cost network computers that were supposed to make PCs go away, the engineers behind x86 find a way to make it adapt to the situation. Is that a problem? Critics say x86 is saddled with the burden of supporting outdated features and software, and that improvements in energy efficiency and software development have been sacrificed to its legacy. And a comedian would say it all depends on what you think about disco.'"
Re:Let me guess... (Score:5, Informative)
I think 50% of the transistors on a modern CPU are cache; you could call that legacy stuff. But the 60% figure makes no sense. For the real, seldom-used legacy instructions, less time is spent on optimizing them in Microcode [wikipedia.org]. And the microcode does not take THAT much space on a CPU.
Some sources:
CPU die picture, est. 50% = cache [hexus.net]
The P6 takes ~40% for compatibility reasons [arstechnica.com]. And as the total transistor count grows, that percentage should DECREASE, not INCREASE. If the amount grows, it is for performance reasons, not compatibility reasons.
However, when you consider that the source is the "XenSource Chief Technology Officer", it is not surprising that backwards compatibility gets that much attention. A main reason virtualization exists is to run older platforms so they stay compatible.
60% (Score:2, Informative)
"There's no reason whatsoever why the Intel architecture remains so complex," said XenSource Chief Technology Officer Simon Crosby. "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."
(Emphasis mine)
Heh, according to the latest in-depth articles, the legacy cruft takes less than 10% of the chip. A far cry from Crosby's claim of 60 percent, and that from a Chief Technology Officer, no less.
Not Windows or Linux per se _but_... (Score:5, Informative)
For WinNT and variants (2K, XP) I don't know how much 16-bit code is in there. I've written drivers for 2K/XP and could not find a single 16-bit-style instruction; however, even the NT series for x86 uses segments. FS is used for process & thread info. IIRC even AMD64 long mode implements FS & GS to make OS porting easier.
Lastly, 16-bit code (instructions operating on 16 bits of a 32-bit register) is trivial in 32-bit mode: all you have to do is precede an instruction with 0x66 and/or 0x67 to switch a 32-bit instruction to its 16-bit form.
The problem transcends MSDOS and goes to the BIOS and boot sequence itself. Intel tried to address this with EFI, but that seems to be slow to gain traction, probably because of backwards compatibility.
Need for 8086 and real mode? (Score:3, Informative)
The question that leads to is: what is gained by removing the legacy junk? The guy from XenSource in the article claimed, "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes." That seems ridiculous. Maybe he's talking about 60% of the silicon in a certain subsystem of the CPU, because legacy support certainly can't account for 60% of the total transistors.
If the savings are minimal, and those modes don't affect anything once you've switched to 32- or 64-bit protected mode, then maybe it's a moot point.
To really shift the instruction set, you obviously have to do it in an evolutionary way, such as allowing access to the lower-level IS (i.e. the instructions that x86 gets translated into) in a virtual machine environment. So you could have a more efficient Linux OS running in a VM, and if the benefits of that are substantial, more people might use that mode for the host OS (which could then run x86 VMs for legacy). It's easy to see that being used for Linux and even Mac OS, as their portability is already proven and they began as modern OSes, working only in protected mode.
Re:Weeellll there's also: (Score:3, Informative)
Because of installed base
Again, because of installed base. Although as Apple has shown with the PPC -> x86 migration (and also m68k -> PPC), this isn't such a big factor. Major software is constantly being upgraded, and old CPUs can always be emulated if necessary. You might say the performance isn't good, but how fast does a 20-year-old app have to run?
Installed base.
Tell that to AMD. They seem to be doing pretty well for themselves.
-matthew
Re:Does it matter? (Score:3, Informative)
The bottom line is that x86 had a series of killer applications. First it was Lotus 1-2-3, then it was Windows and Office. This tilted the playing field away from architecture and toward the ISA that had the largest installed base. If the 68K had been the killer-app winner out of the gate, the world would have been a different place; its architectural advantages gave it a ten-year lead on the x86. But a superior architecture doesn't help a user run Lotus. The world would also have been different if the killer apps were open source. Had the Internet and Linux come maybe five years earlier, it is possible that we'd still see other processor ISAs in production today for the low-end market.
But in the end it was investment that tipped the scales in favor of x86. Apple's move away from the 68K to the PPC architecture was ultimately due to the fact that the market for 68K processors didn't support enough investment to keep it competitive. By jettisoning the 68K, they were able to get on a development curve that rose faster with less investment. The same goes for PPC to x86. They just weren't selling enough CPUs to justify the investment in keeping the processor line up to date.
Re:It's hairy to emulate, too (Score:3, Informative)
When it comes to Rosetta, you should also remember that a lot of the process is not actually being emulated. Every time you call something in the standard library, you are executing native code. There's a small overhead for swapping byte orders of passed arguments, but things like sorting an array, or drawing a bezier path are all handled by native code. The core application logic is still emulated, but you get a huge speed bonus from the native library calls. This is even more noticeable in comparison to something like VirtualPC, since the biggest bottleneck is access to the display or block devices.
x64 ABI slightly fucked-up? (Score:3, Informative)
I mean:
http://en.wikipedia.org/wiki/X86_calling_conventi
Why require 16-byte alignment? Oh, so that xmm data can be stored aligned on stack. But how often do you need it? 0.01% of all stack frames or less? Wouldn't it make more sense to do this alignment when entering functions that needs it (3 assembler commands, right?). Why so many registers allocated for args? Why not drop 387 stack support at all - wouldn't that improve context switching times? (Hmm, I may be wrong here)... Finally why MS felt obligated to come with their fucking own version of ABI?! (Ok, that last one is rhetoric)...
But that's peanuts compared to the whole memory-model / "int" size thing. I mean, do people never learn? At least the 16-bit Unicode problems should've taught us something about bean-picking. So now we have a cache-spoiling-if-nothing-else 32-bit-selecting prefix on every other fucking CPU instruction, and you cannot have more than 4 GiB of executable code. What's that, "640K should be enough for everyone" once again? What if I want some code generator for turning my data into self-processing code? (Old-schoolers may remember "compiled sprites" to get my idea.)
x64 is a great step forward for x86, and it could have been better if wiser (IMHO) decisions had been made in its infancy. Maybe it's too late now, but I guess it will bite our asses in the years to come.
Re:The X86 is a pig. (Score:3, Informative)
I have one example here. It is a small DOS program called convert.exe, which somehow does transformations on ELF files in the linking phase of a cross-platform build environment.
From what I know, VxWorks licensed this from another company, which no longer has the sources.
From time to time this program crashes on the output generated by the Tornado compiler. This renders our daily builds unusable for particular targets, which is definitely a show-stopper for daily testing of our embedded software.