
Despite Aging Design, x86 Still in Charge

An anonymous reader writes "The x86 chip architecture is still kicking, almost 30 years after it was first introduced. A News.com article looks into the reasons why we're not likely to see it phased out any time soon, and the history of a well-known instruction set architecture. 'Every time [there is a dramatic new requirement or change in the marketplace], whether it's the invention of the browser or low-cost network computers that were supposed to make PCs go away, the engineers behind x86 find a way to make it adapt to the situation. Is that a problem? Critics say x86 is saddled with the burden of supporting outdated features and software, and that improvements in energy efficiency and software development have been sacrificed to its legacy. And a comedian would say it all depends on what you think about disco.'"
  • Re:Let me guess... (Score:5, Informative)

    by leuk_he ( 194174 ) on Tuesday April 03, 2007 @11:13AM (#18588323) Homepage Journal
    "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."

    I think 50% of the transistors on a modern CPU are cache; you could call that legacy stuff. But the 60% figure makes no sense. For the real, seldom-used legacy instructions, less time is spent on optimizing them in Microcode [wikipedia.org]. And the microcode does not take THAT much space on a CPU.

    Some sources:
    CPU die picture, est. 50% = cache [hexus.net]
    P6 takes ~40% for compatibility reasons [arstechnica.com]. And as the total grows, the percentage should DECREASE, not INCREASE. If the amount grows, it is for performance reasons, not compatibility reasons.

    However, when you consider the source, "XenSource Chief Technology Officer", it is not surprising that backwards compatibility gets that much attention. A main reason virtualization exists is to run older platforms and keep them compatible.
  • by Yst ( 936212 ) on Tuesday April 03, 2007 @11:13AM (#18588331)
    Modern English is about 750 years old. English is at least 1550 years old. Tradition is to trace the English presence in Britain to the quasi-historical Anglo-Saxon incursions of the mid-5th century, but migration almost certainly preceded military confrontation. The starting point for the English language (and the Old English era) is the introduction of a continuous Anglic presence to Britain. And that linguistic heritage, termed English, begins at least 1550 years ago.
  • 60% (Score:2, Informative)

    by anss123 ( 985305 ) on Tuesday April 03, 2007 @11:15AM (#18588383)
    From the article:
    "There's no reason whatsoever why the Intel architecture remains so complex," said XenSource Chief Technology Officer Simon Crosby. "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."
    (Emphasis mine)

    Ehe, according to the latest in-depth articles, the legacy cruft takes less than 10% of the chip. A far cry from Crosby's claim of 60 percent, and that from a Chief Technology Officer no less :p
  • by burnttoy ( 754394 ) on Tuesday April 03, 2007 @11:15AM (#18588385) Homepage Journal
    Boot loaders tend to be 16-bit, 8086 segment-model code; at the least they contain enough 16-bit code to get into 32-bit mode. The BIOS is 16-bit legacy code, at least in part, since an x86 PC chip still boots in real mode (there is a 386 embedded variant that doesn't). The Windows 9x series is _RIDDLED_ with 16-bit code, especially the display drivers; although many of these switch to 32-bit mode ASAP, the entry points are 16-bit code. Any attempt at killing off 16-bit code would stop any 9x system from running.

    For WinNT and variants (2K, XP) I don't know how much 16-bit code is in there. I've written drivers for 2K/XP and could not find a single 16-bit-style instruction; however, even the NT series on x86 uses segments: FS is used for process and thread info. IIRC even AMD64 long mode keeps FS and GS to make OS porting easier.
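
    For illustration, a minimal sketch of that kind of segment use, assuming Linux/x86-64 and GCC (not NT itself): thread-local variables there are addressed through FS, much like the per-thread info described above. Compiling with gcc -S shows the %fs-relative accesses.

        /* Sketch: FS-based thread-local storage (assumes Linux/x86-64, GCC). */
        #include <stdio.h>

        static __thread int per_thread_counter = 0;  /* one copy per thread */

        int main(void) {
            /* The generated code addresses this as %fs:...@tpoff */
            per_thread_counter++;
            printf("counter = %d\n", per_thread_counter);
            return 0;
        }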

    Lastly, 16-bit code (instructions operating on 16 bits of a 32-bit register) is trivial in 32-bit mode: all you have to do is precede an instruction with 0x66 (operand size) and/or 0x67 (address size) to switch a 32-bit instruction to a 16-bit one.
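
    A hedged sketch of that prefix trick (assumes GCC or Clang on x86, AT&T-syntax inline asm; illustration only): the same opcode bytes, 89 D8 for mov eax, ebx, become a 16-bit move when preceded by 0x66.

        /* Sketch: the 0x66 operand-size prefix in action (GCC/Clang, x86). */
        #include <stdio.h>

        int main(void) {
            unsigned int out32, out16;

            /* 89 D8 = mov eax, ebx: a full 32-bit move */
            __asm__("movl $0xAABBCCDD, %%ebx\n\t"
                    "xorl %%eax, %%eax\n\t"
                    ".byte 0x89, 0xD8"
                    : "=a"(out32) : : "ebx");

            /* 66 89 D8 = mov ax, bx: same opcode with the 0x66 prefix,
               so only the low 16 bits of eax are written */
            __asm__("movl $0xAABBCCDD, %%ebx\n\t"
                    "xorl %%eax, %%eax\n\t"
                    ".byte 0x66, 0x89, 0xD8"
                    : "=a"(out16) : : "ebx");

            printf("without prefix: %08X\n", out32);  /* AABBCCDD */
            printf("with 0x66:      %08X\n", out16);  /* 0000CCDD */
            return 0;
        }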

    The problem transcends MS-DOS and goes to the BIOS and boot sequence itself. Intel tried to address this with EFI, but that seems to be slow to gain traction, probably because of backwards compatibility.
  • by tji ( 74570 ) on Tuesday April 03, 2007 @11:29AM (#18588615)
    The article claims that Windows still requires the old compatibility modes to boot. Is this true? I could see how Win95-like OSes would, because they basically boot on DOS. But for NT and beyond, wouldn't they be fine with those old legacy capabilities removed?

    The question that leads to is: what is gained by removing the legacy junk? The guy from XenSource in the article claimed "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes," which seems ridiculous. Maybe he's talking about 60% of the silicon in a certain subsystem of the CPU, because you certainly can't remove 60% of the total transistors.

    If the savings are minimal, and those modes don't affect anything once you've switched to 32- or 64-bit protected mode, then maybe it's a moot point.

    To really shift the instruction set, you obviously have to do it in an evolutionary way, such as allowing access to the lower-level instruction set (i.e. the instructions that x86 gets translated into) in a virtual machine environment. So you could have a more efficient Linux OS running in a VM, and if the benefits of that are substantial, more people might use that mode for the host OS (which could then run x86 VMs for legacy software). It's easy to see that being used for Linux and even Mac OS, as their portability is already proven and they began as modern OSes, working only in protected mode.
  • by misleb ( 129952 ) on Tuesday April 03, 2007 @11:46AM (#18588861)

    "4. Price / performance. A segment the x86 has done well in."

    Because of installed base.

    "Security. Will my x86 progs be supported in 20 years? The answer: yes."

    Again, because of installed base. Although, as Apple has shown with the PPC -> x86 migration (and also m68k -> PPC), this isn't such a big factor. Major software is constantly being upgraded, and old CPUs can always be emulated if necessary. You might say that performance isn't good, but how fast does a 20-year-old app have to run?

    "Availability. Hmm... Intel, I'd like to buy 1,000,000 CPUs. Intel: Sure thing."

    Installed base.

    "Good will. What should we buy, Intel or PPC? PPC? What's that? Go Intel! Yes boss."

    Tell that to AMD. They seem to be doing pretty well for themselves.

    -matthew
  • Re:Does it matter? (Score:3, Informative)

    by hey! ( 33014 ) on Tuesday April 03, 2007 @12:42PM (#18589669) Homepage Journal
    Because a superior architecture doesn't automatically mean you can produce a better-performing processor at the same price. There are other factors, including amortizing the cost of state-of-the-art fabrication facilities over the number of processors sold. The same can be said for fiendishly clever engineering: it's not just putting lipstick on a pig; with enough dough you can pay somebody to give the pig a bionic skeleton.

    The bottom line is that x86 had a series of killer applications. First it was Lotus 1-2-3, then it was Windows and Office. This tilted the playing field away from architecture and toward the ISA that had the largest installed base. If the 68K had been the killer-app winner out of the gate, the world would have been a different place; its architectural advantages gave it a ten-year lead on the x86. But a superior architecture doesn't help a user run Lotus. Also, the world would have been different if the killer apps were open source. Had the Internet and Linux come maybe five years earlier, it is possible that we'd still see other processor ISAs in production today for the low-end market.

    But in the end it was investment that tipped the scales in favor of x86. Apple's move away from the 68K to the PPC architecture was ultimately due to the fact that the market for 68K processors didn't support enough investment to keep the line competitive. By jettisoning the 68K, Apple was able to get on a development curve that rose faster with less investment. The same goes for PPC to x86: they just weren't selling enough CPUs to justify the investment in keeping the processor line up to date.
  • by TheRaven64 ( 641858 ) on Tuesday April 03, 2007 @01:12PM (#18590151) Journal
    Don't forget two things. The first is that one of the design goals of PowerPC was to be able to emulate x86. For this reason, there are a few things that are a bit ugly in the instruction set, and it feels much less clean than something like SPARC or Alpha.

    The second concerns Rosetta: you should remember that a lot of the process is not actually being emulated. Every time you call something in the standard library, you are executing native code. There's a small overhead for swapping the byte order of passed arguments, but things like sorting an array or drawing a Bézier path are handled entirely by native code. The core application logic is still emulated, but you get a huge speed boost from the native library calls. This is even more noticeable in comparison to something like VirtualPC, where the biggest bottleneck is access to the display or block devices.
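
    For illustration, a minimal sketch of the byte-order fix-up such a call thunk performs at the boundary (plain C; bswap32 is a hypothetical helper, not Rosetta's actual code):

        /* Sketch: swapping a big-endian (PPC-style) argument for a
           little-endian (x86) callee. Illustrative only. */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t bswap32(uint32_t v) {  /* hypothetical helper */
            return (v >> 24) | ((v >> 8) & 0x0000FF00u)
                 | ((v << 8) & 0x00FF0000u) | (v << 24);
        }

        int main(void) {
            uint32_t ppc_arg = 0x12345678;        /* as the emulated code sees it */
            uint32_t x86_arg = bswap32(ppc_arg);  /* the thunk swaps at the call site */
            printf("%08X -> %08X\n", ppc_arg, x86_arg);
            return 0;
        }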

  • by ceeam ( 39911 ) on Tuesday April 03, 2007 @02:51PM (#18591873)
    Is it only me, or does anyone else feel a bit uneasy about the lost opportunity for a good cleanup when we moved to the x64 ABI (yes, I don't like "x86_64")?

    I mean:

    http://en.wikipedia.org/wiki/X86_calling_conventions [wikipedia.org]

    Why require 16-byte alignment? Oh, so that XMM data can be stored aligned on the stack. But how often do you need that? 0.01% of all stack frames or less? Wouldn't it make more sense to do this alignment only when entering functions that need it (three assembly instructions, right?)? Why so many registers allocated for args? Why not drop 387 stack support altogether; wouldn't that improve context-switching times? (Hmm, I may be wrong here)... Finally, why did MS feel obligated to come up with their own fucking version of the ABI?! (OK, that last one is rhetorical)...
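
    To make the alignment complaint concrete, a minimal sketch (assumes an SSE-capable x86 compiler with GCC-style attributes): the aligned load _mm_load_ps faults on an address that isn't 16-byte aligned, which is what the ABI rule guarantees against, while _mm_loadu_ps tolerates misalignment at some cost.

        /* Sketch: why 16-byte alignment matters for XMM loads. */
        #include <stdio.h>
        #include <xmmintrin.h>

        int main(void) {
            float buf[8] __attribute__((aligned(16))) =
                { 1, 2, 3, 4, 5, 6, 7, 8 };

            __m128 a = _mm_load_ps(buf);       /* 16-byte aligned: fine */
            __m128 b = _mm_loadu_ps(buf + 1);  /* unaligned: needs movups;
                                                  _mm_load_ps here would fault */
            float out[4];
            _mm_storeu_ps(out, _mm_add_ps(a, b));
            printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
            return 0;
        }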

    But that's peanuts compared to the whole memory-model / "int" size thing. I mean, do people never learn? At least the 16-bit Unicode problems should've taught us something about bean-counting. So now we have a cache-spoiling (if nothing else) 32-bit-selecting prefix on every other fucking CPU instruction, and you cannot have more than 4 gigs of executable code. What's that, "640K should be enough for everyone" once again? What if I want some code generator for turning my data into self-processing code? (Old-schoolers may remember "compiled sprites" to get my idea.)

    x64 is a great step forward for x86, but it could have been better if wiser (IMHO) decisions had been made in its infancy. Maybe it's too late now, but I guess it will bite us in the ass in the years to come.
  • by ravyne ( 858869 ) on Tuesday April 03, 2007 @03:11PM (#18592203)
    I drive a '96 Cavalier. It's not stylish, it's not particularly fast, it has no power windows or locks, and due to some dings it's not even orthogonal anymore. But it was cheap, relatively fuel-efficient, and reliable, and it gets me from A to B as fast as I'm otherwise allowed. We geeks tend to pine for sleek ISAs like MIPS or Power in much the same way that car enthusiasts wax romantic about the latest sports car. For most of us, however, practicality forces us to drive more modest vehicles. It's not practical to drive a vehicle that requires some exotic fuel, in the same way that it's not practical to run a CPU that digests some exotic instruction set, and for the same reasons: limited use and availability leads to higher cost of ownership overall, while economies of scale and past investment lead to comparatively rock-bottom prices.

    The PC is also bogged down by something far more sinister than the x86 instruction set, namely the PC BIOS. This is only just beginning to go away, with Apple having adopted Intel's EFI firmware (Open Firmware on their PPC systems before that) and the growing list of LinuxBIOS-supported motherboards (still not ready for personal use, but getting there). Widespread EFI adoption might take place if Microsoft releases a home OS capable of using EFI without the BIOS compatibility layer.

    Another point to watch in the future is the proliferation of platforms such as the CLR (.NET) and, to a lesser extent, the JVM. These platforms serve as an abstraction layer between the instruction set the software is written for and the instruction set of the hardware it runs on. With a performance difference of 10% or so now, and that difference shrinking as the technology matures, the underlying architecture will begin to lose its hold on being the defining element of the platform. We're already seeing x86 gain extensions that make virtualization (such as Xen) more efficient, and I suspect it will not be long before it gains features to make the .NET platform and similar technologies run more efficiently as well. If these technologies eventually become the de facto target for software, we may see a future in which the CPU's sole purpose is to efficiently support a higher-level platform defined by software.

    In the embedded world, x86 does not reign; in fact, x86 is a very small portion of the embedded market. PowerPC rules, followed by ARM and 68K, and that doesn't even count the smaller processing tasks handled by microcontrollers like the 8051 or PIC devices. x86 has all but been ousted where engineers are freed from the concerns of backwards compatibility and high performance is not required.
  • by AmunRa ( 166367 ) on Tuesday April 03, 2007 @03:43PM (#18592815) Homepage
    Note: the ABI (Application Binary Interface) isn't defined by the chip; it's defined by the operating system. Linux generally uses the System V ABI (on x86), simply because it was easier to adopt a common ABI than to invent a new one, and keeping the Linux ABI for x86-64 similar to the x86 one makes the whole toolchain much easier to develop. There is nothing stopping you from calling functions in any way you see fit, saving and restoring no state if you want, but you'll have fun interfacing with other pre-compiled libraries. Most of the stuff in the ABI is there for a good reason, and the optional stuff (the saved frame pointer) can generally be disabled with the appropriate optimisation setting in your compiler.
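
    A small, hedged sketch of that point (assumes GCC or Clang on x86-64, where both attributes exist): the calling convention is a per-function agreement, not a property of the chip, so the two competing x86-64 conventions can be forced onto otherwise identical functions and the compiler emits the matching call sequence for each.

        /* Sketch: calling conventions are chosen per function (x86-64, GCC/Clang).
           Under the System V convention a and b arrive in edi/esi; under the
           Microsoft convention they arrive in ecx/edx. Same source, same chip. */
        static int __attribute__((sysv_abi)) add_sysv(int a, int b) { return a + b; }
        static int __attribute__((ms_abi))   add_ms  (int a, int b) { return a + b; }

        int main(void) {
            return add_sysv(1, 2) + add_ms(3, 4);  /* 3 + 7 = 10 */
        }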
  • Re:The X86 is a pig. (Score:3, Informative)

    by chthon ( 580889 ) on Wednesday April 04, 2007 @02:54AM (#18600357) Journal

    I have one example here. It is a small DOS program, called convert.exe, which somehow does transformations in the linking phase of ELF files in a cross-platform environment.

    From what I know, VxWorks licensed this from another company, which does not have the sources anymore.

    From time to time this program crashes due to the output generated by the Tornado compiler. This renders our daily builds unusable for particular targets, which is definitely a show-stopper for testing our embedded software daily.
