Despite Aging Design, x86 Still in Charge

An anonymous reader writes "The x86 chip architecture is still kicking, almost 30 years after it was first introduced. A News.com article looks into the reasons why we're not likely to see it phased out any time soon, and the history of a well-known instruction set architecture. 'Every time [there is a dramatic new requirement or change in the marketplace], whether it's the invention of the browser or low-cost network computers that were supposed to make PCs go away, the engineers behind x86 find a way to make it adapt to the situation. Is that a problem? Critics say x86 is saddled with the burden of supporting outdated features and software, and that improvements in energy efficiency and software development have been sacrificed to its legacy. And a comedian would say it all depends on what you think about disco.'"
  • Let me guess... (Score:5, Insightful)

    by Anonymous Brave Guy ( 457657 ) on Tuesday April 03, 2007 @10:42AM (#18587763)

    A News.com article looks into the reasons why we're not likely to see it phased out any time soon

    I'm going to go with:

    1. Installed base.
    2. Installed base.
    3. Installed base.

    Did I miss anything?

  • The X86 is a pig. (Score:3, Insightful)

    by LWATCDR ( 28044 ) on Tuesday April 03, 2007 @10:43AM (#18587785) Homepage Journal
    The X86 ISA is a mess. It is a total pig. It is short on registers and it was just an unpleasant ISA to use from day one.
    The problem is that it is a bloody fast and cheap pig that runs a ton of software and has billions or trillions of dollars invested in keeping it useful. I am afraid we are stuck with it. At least the X86-64 is a little better.
  • lock in (Score:5, Insightful)

    by J.R. Random ( 801334 ) on Tuesday April 03, 2007 @10:46AM (#18587817)
    The x86 instruction set will be retired in the same year as the QWERTY keyboard layout.
  • Simple! (Score:5, Insightful)

    by VincenzoRomano ( 881055 ) on Tuesday April 03, 2007 @10:49AM (#18587887) Homepage Journal
    Just like the four-stroke engine: it's not the best design, and it can be greatly enhanced and improved, but it's still here.
    And just like the four-stroke engine, modern engines still just burn gasoline and push the car forward. That is where the similarity with the original engines ends.
  • by InsaneProcessor ( 869563 ) on Tuesday April 03, 2007 @10:55AM (#18587987)
    Yes, the instruction set is old, but it does still work. As a consumer, why should I have to re-invest in software that I purchased and that does the job, just because my hardware failed, or because faster hardware became available and I upgraded? Apple bit that one some time ago. Last year, I had an investment of $4000.00 in software when Intel came out with a significantly faster part that was dropping in price. Just by upgrading my hardware (cost: $800), my investment improved significantly. Spending $4800.00 would not have justified the upgrade, but the low cost of the hardware alone did. Also, there was no learning curve involved.

    You don't buy a new car just because the tires need replacing (well, some people do, but that is rarely the fiscally responsible thing).

    If it ain't broke, it doesn't need fixing.
  • by PineHall ( 206441 ) on Tuesday April 03, 2007 @10:56AM (#18588021)
    It has been said that people will not change unless something is perceived to be ten times better. The problem is that nothing has been perceived to be that much better, so people stay with what they know.
    Paul
  • Re:Simple! (Score:5, Insightful)

    by Wite_Noiz ( 887188 ) on Tuesday April 03, 2007 @10:59AM (#18588075)
    I've heard loads of metaphors about why x86 will be around for years to come, but none of them really hold.
    An engine is a black box - petrol in, kinetic energy out (simplifying) - whereas the architecture of a processor is not.

    AMD and Intel can make as many additions to x86 as they like, but if they stop supporting the existing instruction set, they'll sell nothing.

    I'm sure Linux would be compiled onto a new architecture overnight, but I doubt MS would move any time soon - and their opinion carries a lot of weight on the desktop.

    RISC ftw!
  • by FredDC ( 1048502 ) on Tuesday April 03, 2007 @11:00AM (#18588077)
    If a chipmaker declared its chip could run only software written past some date such as 1990 or 1995, you would see a dramatic decrease in cost and power consumption, Crosby said. The problem is that deep inside Windows is code taken from the MS-DOS operating system of the early 1980s, and that code looks for certain instructions when it boots.
     
    Even new software might (and often does) use the so-called old instructions. If you want to completely redesign the hardware you would also have to completely rewrite the software from scratch as you would not be able to rely on previously written code and libraries. This is simply not feasible on a global scale...
  • Re:Does it matter? (Score:4, Insightful)

    by Zo0ok ( 209803 ) on Tuesday April 03, 2007 @11:04AM (#18588161) Homepage
    And since the 386 consisted of 275,000 transistors while modern CPUs have more than 200 million, the cost/waste of backwards compatibility with the 386 is very small - on the order of 0.1% of the transistor budget.
  • by anss123 ( 985305 ) on Tuesday April 03, 2007 @11:06AM (#18588219)
    4. Price/performance. A segment the x86 has done well in.
    5. Security. Will my x86 progs be supported in 20 years? The answer: yes.
    6. Availability. Hmm... Intel, I'd like to order 1,000,000 CPUs. Intel: Sure thing.
    7. Good will. What should we buy, Intel or PPC? PPC? What's that? Go Intel! Yes, boss. (Just look how far Itanium got on Intel's name alone.)

    :D
  • by fitten ( 521191 ) on Tuesday April 03, 2007 @11:12AM (#18588311)
    Already been done, didn't catch on (see Itanium).

    Because there is such a massive installed base of x86 software, you'd be throwing away silicon. To be sure that software ran on the most systems possible, software would still be written for x86 and not the 'desired' architecture.

    That being said, OSS tends to have good inroads here, in that you get all the source, so you can recompile for whatever architecture you want. However, since x86 still has the huge marketshare, other architectures get less attention. Also, all of the JIT languages (Java, C#, etc.) make transitioning easier IF you can get the frameworks ported to a stable environment on the 'desired' architecture.

    The main problem is that there is *so* much legacy code that exists in binary (EXE) format only (the source code for many of those programs has literally been lost) and that can be directly tracked to money. Some systems that companies continue to use have so much momentum that changing platforms would require extreme amounts of money: reverse-engineering the current system - complete with quirks and oddities - rewriting it, and (here is a big part that many people fail to add in) retesting and revalidating it. Many companies don't want to spend that kind of money to replace something that 'works'.

    There's so much work/time/effort invested in x86 now that it's hard to jump off that train. AMD's x86-64 is a good approach in that you can run all the old stuff and develop on the new at the same time with few performance penalties. However, I don't know if we'll ever be able to shrug off the burden of x86... at least not for a long time to come. It'd take something truly disruptive to divert from it (and what people are currently envisioning as quantum computing is not that disruption).
  • by stevey ( 64018 ) on Tuesday April 03, 2007 @11:16AM (#18588405) Homepage

    That isn't entirely true. Sure, code might exist in the wild which uses old instructions, but it wouldn't need to be rewritten - just recompiled with a suitable compiler. (Ignoring people who hand-roll assembly, of course!) (And whether the source still exists is an entirely separate issue!)

    However, with all the microcode on board chips these days, it should be possible to emulate older instructions. Provided Intel can persuade compiler-writers to deprecate certain opcodes, the situation should essentially resolve itself in a few years.
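
    A rough user-space analogy of that trap-and-emulate idea, sketched in C: catch the fault a retired opcode raises and "emulate" it in a handler. This is only a sketch assuming Linux/x86-64 with glibc, and ud2 merely stands in for a hypothetical deprecated instruction; a real chip would do this in microcode, not via signals.

        /* Trap-and-emulate sketch: a SIGILL handler "emulates" a retired
         * opcode, then resumes execution past it.  Assumes Linux/x86-64
         * with glibc; ud2 (encoded 0F 0B, 2 bytes) plays the part of a
         * hypothetical deprecated instruction. */
        #define _GNU_SOURCE
        #include <signal.h>
        #include <stdio.h>
        #include <ucontext.h>
        #include <unistd.h>

        static void on_sigill(int sig, siginfo_t *si, void *ctx)
        {
            static const char msg[] = "SIGILL: emulating retired opcode\n";
            ucontext_t *uc = ctx;
            write(STDOUT_FILENO, msg, sizeof msg - 1);
            uc->uc_mcontext.gregs[REG_RIP] += 2;  /* skip the 2-byte ud2 */
            (void)sig; (void)si;
        }

        int main(void)
        {
            struct sigaction sa = {0};
            sa.sa_sigaction = on_sigill;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGILL, &sa, NULL);

            __asm__ volatile("ud2");  /* the "deprecated" instruction */
            puts("resumed after emulation");
            return 0;
        }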

  • by scgops ( 598104 ) on Tuesday April 03, 2007 @11:17AM (#18588415)
    Computer manufacturers have tried making non-compatible machines. Commodore 64, VIC 20, Coleco Adam, Atari ST. They all had their place in time and their niche in the market before fading out.

    Something they all had in common, though, is that they sold better than IBM's mostly-compatible PCjr. I attribute that difference to software and compatibility problems. Because of BIOS differences, a number of programs written for the PC couldn't run on the PCjr. That led to a fragmentation of shelf space at software retailers and confusion among retail customers, and led to customers avoiding the platform in favor of easier-to-understand options.

    I would expect something similar to happen if Intel, AMD, or anyone else started making mostly-compatible x86 processors. It wouldn't sell unless all of the software people are used to running still worked. Sure, someone could take Transmeta's approach and emulate little-used functionality in firmware rather than continuing to implement everything in silicon, but it all pretty much needs to keep working, so why bother?

    Seriously, why would anyone undertake the effort and expense needed to slim-down x86 processors when the potential gains are small and the market risk is pretty huge? No chip manufacturer wants to replace the math-challenged Pentium as the most recent mass-market processor to demonstrably not work right.

    Pundits and nerds can talk all they want about why the x86 architecture should be put out to pasture, but it won't happen until a successor is available that can run Windows, OS X, and virtually all current software titles at acceptable speeds. And that seems pretty unlikely to happen on anything other than yet another generation of x86 chips.
  • Re:Let me guess... (Score:3, Insightful)

    by ooze ( 307871 ) on Tuesday April 03, 2007 @11:22AM (#18588505)
    The x86 dominance is basically the result of two crooked architectures holding each other up: if MS-DOS weren't so crappy that it depends on x86, the processor could be changed. If x86 weren't too crappy to emulate properly, then MS-DOS or its successors could be changed. As it is, we are stuck with both, because no one wants to change both at the same time, and you cannot really change each independently.
    There is something I hope for:
    Vista tanks mightily, and OS X and its successors become the dominant OS in 10 years. They are instruction-set agnostic, have been shown to run on multiple platforms with pretty little effort, and at the moment run on three different instruction sets: x86, POWER, and (on the iPhone) ARM. Linux runs everywhere too. And as soon as you have this, there is no reason not to drop the most expensive-to-develop-for and least efficient architecture.
    But as long as people still use MS operating systems, we will be stuck with x86 and have to pay the price ... the energy price.
  • by Anonymous Coward on Tuesday April 03, 2007 @11:22AM (#18588509)
    And the Playstation 3, and the Wii, and your fridge...
  • by astrashe ( 7452 ) on Tuesday April 03, 2007 @11:24AM (#18588547) Journal
    If free software ever goes truly mainstream, and the stacks people use are free from top to bottom, lock in goes away in general. Even hardware lock in.

    A couple of years ago, I was shifting some stuff around and I needed to clean off my main desktop machine, an x86 box. I installed the same linux distro on a G4 mac and just copied my home directory over. Everything was exactly the same -- my browser bookmarks and stored passwords, my email, my office docs, etc.

    A lot of people take Apple's jump from PowerPC to x86 as a sign that x86 is unstoppable. But I'd argue that the comparative ease with which the migration took place shows how weak processor lock in is becoming. The shift from PPC to x86 was nothing compared to the jump from MacOS Classic to OS X.

    The real reason x86 won't go away any time soon is that MS has decided that's the only thing it's going to support, and MS powers most of the computers in the world. Windows is closed, so MS's decision on this is final, and impossible to appeal.
  • by Vexler ( 127353 ) on Tuesday April 03, 2007 @11:26AM (#18588583) Journal
    As part of an operating systems course I am currently taking, we watched a video of a presenter from Intel who lectured on the changes associated with the Itanium processor. In his presentation (see the video at http://online.stanford.edu/courses/ee380/040218-ee380-100.asx [stanford.edu]), he pointed out that Intel has gone from having one or two major ideas driving chip design to having fifteen or twenty minor ideas that they can cram in. The thinking is that if they can amass enough of these "little ideas" together, they can probably cobble together enough performance enhancement to justify production and sales of these chips. Part of the issue is that, as the author of this article also admits, there are currently no "big ideas" coming around the bend in terms of truly revolutionary performance increases.

    The problem, though, is that when you introduce many smaller features, you cannot always anticipate how these features will interact with one another. This is why it is counterintuitive to many people that "new and improved" is not always so, and why you actually risk introducing design bugs more subtle than you can detect. That, combined with the continuing support for legacy code, means that complexity (and power consumption) goes through the roof with each iteration. While it is a testament to the robustness and versatility of the x86 architecture that it has survived this long, one could argue that the architecture *had* to survive because we couldn't come up with the next paradigm shift.

    The good news is that there are solutions to this situation. The bad news is that all of the solutions involve massive change in the way the software industry clings to the tried-and-true, or truly revolutionary innovation in chip re-architecture, or billions of dollars, etc. As the article points out, experience with EPIC has demonstrated how NOT to introduce a completely new architecture. There is no easy way out, but there are several possible paths.
  • Re:Simple! (Score:5, Insightful)

    by smenor ( 905244 ) on Tuesday April 03, 2007 @11:33AM (#18588677) Homepage

    Am I reading you wrong? Most modern engines *are* 4-stroke engines...

    I think that's the point, actually.

    If we were going to start over and design the best way to extract usable power from gasoline from the ground up, we could probably do better than the 4-stroke, just like we could do better than the x86 ISA, and just like we could do better than LCDs for flat panel displays.

    The problem is that, if you take an intrinsically inferior technology, and spend years upon years optimizing it, it will have such a head start that it is almost impossible for a newer, 'better', technology to compete.

  • by Simonetta ( 207550 ) on Tuesday April 03, 2007 @11:35AM (#18588709)
    We lose the X86 when another processor comes along that is cheaper, 10x more powerful, and runs all X86 software at the speed that the users consider to be the same as a PC. Until then we keep the X86. Simple as that. Next tech issue, please.
  • by athloi ( 1075845 ) on Tuesday April 03, 2007 @11:41AM (#18588771) Homepage Journal

    So I took a walk this evening, actually while visiting family. We cruise on past this house in a nearby neighborhood and then stop, because I've backtracked. Next to the trash can is a Dell computer. I think, how bad can it be? And take it home. Older processor, dead hard drive, monitor with a bad cap causing intermittent screwups. The only part not fixed is the monitor, but I dropped in an old IDE drive and it's a perfect FreeBSD machine. Heck, if I had the bandwidth at home, I'd serve my website [chrisblanc.org] off of it.

    The Linux box came from three or four older computers, two of which belonged to me, combined in the least-junky case I could find. I'm still not certain of which distro will be "final" on it, but I'm trying Ubuntu now. This machine gets re-imaged at least once a week, because it's the "beater" box for experiments.

    I've also got a Windows XP machine that I love dearly. It's an Intel board, a 2.4GHz P4, some other stuff I forgot. I put it together for $600 and it's more stable than the Windows machines my neighbors bought from Dell. I have no plans to upgrade to Vista for another two or three years, in largest part because I don't want to buy the hardware.

    Life rewards intelligence if you're willing to apply it. Is this hacking, best practices, or common sense? I also get free jalapeno peppers from a very small garden, if I remember to water it. No "corporate chilis" for me. Linus would be proud.

  • What's wrong with Vista?

    I'll tell you exactly what's wrong with Vista. If you, like many average schmoes around the world, are in the market for a new PC, you're stuck with Vista. Nothing necessarily wrong with that, except that the schmoe will, as usual, get a $500 Dell or a $300 eMachines. Why the hell is he going to spend a grand or more on a PC? Of course, these are pitiful little Duron/Celeron boxes with way too little RAM, lots of bloatware for extra sluggishness, and of course lowest-bidder parts.

    So he'll take that PC home, fire it up, and be pretty much instantly pissed. Not only is it slow and sluggish as hell, but this time he has to contend with a lot of new features that he has no clue or experience about. Depending on his patience, he'll plug away for a day or a few, but eventually he'll call me, or someone like me.

    Now, this is the important part: He's used to XP. He's used to an OS that, while sucky, worked well enough for him and was relatively speedy, so why can't he just have that? Why does he have to have something that worked replaced, just to put up with this shit?

    So I will perform a downgrade, and I'll happily use a pirated copy to do it, too. As far as I'm concerned, he paid for the OS already; I couldn't give a crap about specific licenses for specific machines. This guy just wants to get on with his life, and that is the service I provide.

    Did my first downgrade a couple of days ago, and I expect to be doing several more this year.

    Now I know you'll all be yelling about getting sufficient RAM for his machine, going in and cutting some of the bloat instead of resorting to piracy for the backwards step, but if you're going to say all that, it's obvious you don't do a lot of house calls.

    Oh, and before I get modded into oblivion by the MS fanboys, look into your hearts. You know I'm right.
  • Re:Does it matter? (Score:2, Insightful)

    by lavid ( 1020121 ) on Tuesday April 03, 2007 @11:45AM (#18588851) Homepage
    Because they co-developed the Itanium with Intel. It didn't make sense for them to push their old architecture when they spent a bunch of money to develop a new one for the same market segment. That's why.

    The real problem with the Itanium is that it came out a few years too late: its x86 emulation hardware was designed to be on par with the chips that were current at the originally scheduled release date, not the ones it actually shipped against.
  • by Waffle Iron ( 339739 ) on Tuesday April 03, 2007 @11:52AM (#18588941)
    So? Function call sequences have to be executed on RISC CPUs as well. On the X86, most of those instructions are encoded in a single byte each, which is a cache-friendly compact representation. Under the hood, that whole sequence is recast into an optimal representation for the particular chip and usually executes in about two clock cycles. Pre-decoded instructions are usually cached in some form, so the x86-to-RISC translation is not incurred all that often anyway.

    The bottom line is: has any other architecture enabled apps to run significantly faster over multiple CPU generations at comparable cost? Nope. Other architecture fads have come and gone, but the X86 just absorbs the best ideas from each and keeps marching along.

  • by Erich ( 151 ) on Tuesday April 03, 2007 @12:02PM (#18589071) Homepage Journal
    Haven't we learned this by now? Why do we keep going over this same stupid premise?

    The Instruction Set of a processor architecture with so many resources available to it doesn't really matter, so long as it isn't utterly and completely braindead. X86 isn't braindead enough to qualify... if you had an intercal [catb.org] instruction set or an One Instruction Set Computer [wikipedia.org] it might.

    You really want to do several things to get performance out of an instruction stream -- register renaming, instruction manipulation (breaking them apart or joining them together or changing them into other instructions), elimination of some bad instruction choices, and a host of other things. You would want to do these things even on a "clean" ISA like Alpha or PPC or MIPS. And if you are doing them, the x86 instruction set suddenly becomes much less of a problem. There are even advantages: the code size on x86 tends to be better than a 32-bits-per-instruction architecture.

    Instruction sets are languages with exact meanings, which means you can precisely translate from one instruction set to another. And, as it turns out, you can do it fairly easily and efficiently. That is why Transmeta did pretty well. That is why Apple's Rosetta and Java JIT compilers work (and FX!32 on Alpha before that). And that is why AMD and Intel are right there at the top of the performance curve with x86-style instruction sets: because it JUST DOESN'T MATTER THAT MUCH.
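
    To make the "precise translation" point concrete, here is a toy sketch in C: one invented 'CISC' instruction expands mechanically into two invented 'RISC' micro-ops, much like an x86 front end cracking instructions into uops. Both mini-ISAs are made up for illustration; this is not Transmeta's or anyone else's actual scheme.

        /* Toy binary translation: each source instruction has an exact
         * meaning, so expanding it into target micro-ops is mechanical. */
        #include <stdio.h>

        typedef enum { CISC_ADD_MEM } cisc_op;        /* acc += mem[addr] */
        typedef enum { RISC_LOAD, RISC_ADD } risc_op;

        typedef struct { cisc_op op; int addr; } cisc_insn;
        typedef struct { risc_op op; int a, b; } risc_insn;

        static int translate(cisc_insn in, risc_insn *out)
        {
            switch (in.op) {
            case CISC_ADD_MEM:
                out[0] = (risc_insn){RISC_LOAD, 1, in.addr}; /* r1 = mem[addr] */
                out[1] = (risc_insn){RISC_ADD, 0, 1};        /* r0 += r1       */
                return 2;
            }
            return 0;
        }

        int main(void)
        {
            risc_insn uops[4];
            int n = translate((cisc_insn){CISC_ADD_MEM, 0x40}, uops);
            printf("1 CISC instruction -> %d micro-ops\n", n);
            return 0;
        }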

    Why didn't Transmeta kick more butt? Because they didn't have the economies of scale that AMD and Intel have. Because they didn't have the design resources that AMD and intel have. Because AMD and Intel had better-tuned processes faster than TSMC or whoever was fabbing Transmeta's chips. THOSE are the most important things, not the instruction set that you have on disk.

    Now a good ISA can help in many ways: SIMD instructions really help to point out data level parallelism. More registers helps a wee bit to prevent unnecessary work done around the stack for correctness. You can get rid of a bit of logic if you can execute without translation. But these things can either be added to x86 (SSE/x86-64) or aren't expensive enough to be worth it on a 100 sq mm, >50W processor. Maybe in an embedded, low-power processor.

  • by Waffle Iron ( 339739 ) on Tuesday April 03, 2007 @12:07PM (#18589147)

    Please tell me how using a prefix byte to bump a 16bit operation up to 32 is efficient encoding?

    It's efficient because you hardly ever need to use a size prefix in normal code. In 16-bit mode, the default is 16-bit operations. In 32-bit mode, the default is 32-bit operations. Prefixes are for unusual cases where you're using the "wrong" size for your current mode.

    Note that a lot of RISC architectures would require multiple 32-bit instructions to do the job that a single x86 prefix byte does because they don't natively support 16-bit operations at all.
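
    For anyone who wants the actual bytes, a small C illustration of the parent's point; the encodings are standard IA-32 (32-bit protected mode) and can be checked against Intel's opcode tables.

        /* The operand-size prefix in practice: the common case needs no
         * prefix at all, and the unusual 16-bit case costs one byte,
         * not an extra instruction. */
        #include <stdio.h>

        static const unsigned char mov_eax_1[] = {0xB8, 1, 0, 0, 0}; /* mov eax,1 - no prefix   */
        static const unsigned char mov_ax_1[]  = {0x66, 0xB8, 1, 0}; /* mov ax,1  - 66h + imm16 */

        int main(void)
        {
            printf("mov eax,1: %zu bytes; mov ax,1: %zu bytes\n",
                   sizeof mov_eax_1, sizeof mov_ax_1);
            return 0;
        }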

  • Idiotic... (Score:3, Insightful)

    by evilviper ( 135110 ) on Tuesday April 03, 2007 @12:10PM (#18589209) Journal
    This is the same idiotic argument as always. They don't even try to change it up a little bit...

    The architectural limitations of x86 were probably real up through the Pentium 1 days. After the introduction of Intel's P6 and AMD's K6, everything changed. At that point, x86 was no longer the clumsy CISC snail it used to be. From then on, the fierce competition between Intel and AMD pushed x86 ahead of every other architecture. Others like Alpha held on to the pure performance crown for a few years, but they did so by embracing much higher power consumption. These days, new x86 CPUs are falling in power consumption, not rising. And AMD's Geode CPUs can give you a good-performing x86 CPU for embedded systems, OLPC, and anything else, in under 1W. There's really nothing else that is lower power and still performs as well...

    These days, x86 is more than competitive with everything else in sheer performance and performance-per-watt, and it is far ahead in performance per dollar. One by one, nearly all the limitations of the x86 architecture that were so often paraded out by competitors have been worked around. It's most other architectures that were crippled, in that their short-sighted designs were only really good in one area, and they only became popular because x86 wasn't quite there at the time. Meanwhile, x86 continued to develop, addressing those shortcomings, and the others did not. The only competitors these days are Power and SPARC, and the two highest-profile companies using them have long since come around and started selling x86 themselves.

    Backwards compatibility is only the smallest of reasons that x86 is still around. How many Linux/BSD users continue to buy x86 systems, even though they would hardly notice an underlying architecture change? How many super-computing clusters are x86-based? It's only the Windows world that needs x86 compatibility, and though that's about 90% of the market, the other 10% use x86 anyhow.
  • by Mr. Underbridge ( 666784 ) on Tuesday April 03, 2007 @12:18PM (#18589325)

    Why does he have to have something replaced that worked just to put up with this shit?

    Shhh! If everybody sold good stuff with decent specs and security enabled, you'd be out of business and serving me my lunch. (joking, of course).

    Oh, and before I get modded into oblivion by the MS fanboys, look into your hearts. You know I'm right.

    Who are you, Darth Vader? Search your feelings, Bill...

  • Give it more time (Score:5, Insightful)

    by MarkByers ( 770551 ) on Tuesday April 03, 2007 @12:23PM (#18589403) Homepage Journal
    > Now, this is the important part: He's used to XP. He's used to an OS that, while sucky, worked well enough for him and was relatively speedy, so why can't he just have that? Why does he have to have something that worked replaced, just to put up with this shit?

    If instead of giving up after a day, he had tried it for a week or a month, he would have found out how great everything is. Then in a few months he would be used to it and if you try to make him downgrade to XP he will cry.

    There are many great features in Vista, but you have to try it for yourself.
  • by nuzak ( 959558 ) on Tuesday April 03, 2007 @12:32PM (#18589501) Journal
    > Oh, and before I get modded into oblivion by the MS fanboys,

    For gods sakes, express a point of view and STOP FUCKING WHINING ABOUT MODERATION.

    Seriously. Even if you ARE modded down, it doesn't make you some kind of martyr.
  • by Waffle Iron ( 339739 ) on Tuesday April 03, 2007 @12:32PM (#18589513)

    Like it or not, it's 3 ops (push,mov,pop) per subroutine

    Any processor has to do the exact same work, whether the user-visible encoding is done this way or as an "SP-indexed" addressing mode. At the micro-op level, it all gets renamed, reordered, etc. so that the same things are happening. Moreover, that particular sequence is so common that in all probability most X86 CPUs have special logic just to execute that entire sequence faster than the naive RISC equivalent.
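
    For reference, the sequence under discussion really is tiny. A C snippet with the standard IA-32 encodings (verifiable against the Intel manuals):

        /* The classic prologue/epilogue: four instructions in five bytes.
         * A fixed-width 32-bit RISC encoding would spend 4 bytes per
         * instruction on the same entry/exit work. */
        #include <stdio.h>

        static const unsigned char entry[] = {
            0x55,        /* push ebp      (1 byte)  */
            0x89, 0xE5,  /* mov  ebp, esp (2 bytes) */
        };
        static const unsigned char leave_[] = {
            0x5D,        /* pop  ebp      (1 byte)  */
            0xC3,        /* ret           (1 byte)  */
        };

        int main(void)
        {
            printf("entry+exit: %zu bytes total\n",
                   sizeof entry + sizeof leave_);
            return 0;
        }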

  • by spaceyhackerlady ( 462530 ) on Tuesday April 03, 2007 @12:35PM (#18589543)

    Just shows you what good marketing can accomplish with garbage.

    Sometimes it's support and marketing that make all the difference. Way back when, IBM introduced a new computer called the System/360. It was crude compared to a lot of its competition, but IBM knew how to sell it, and they supported it well. IBM went on to rule the mainframe world. Their competitors are now footnotes in history books.

    One of IBM's competitors gave us the phrase "Sullen but unrebellious" to describe how much money must be spent looking after customers.

    I play with Linux on UltraSPARC (Sun Ultra 5) and StrongARM (gumstix) but am typing this on an x86 Slackware box. Does this mean I too have sold out? :-)

    ...laura

  • by Anonymous Coward on Tuesday April 03, 2007 @12:44PM (#18589707)
    Not only has MS supported non-x86 architectures in the past (NT shipped with PPC, Alpha, and MIPS binaries on the CD), but they support non-x86 architectures now (Itanium and x86-64 are both sufficiently different from x86 to be different architectures).

    MS doesn't have a pony in this race. They just try to support whatever their customers are using. When the RISC market dried up, MS stopped supporting those non-x86 architectures because their customers weren't using them.

    dom
  • Re:Does it matter? (Score:3, Insightful)

    by shani ( 1674 ) <shane@time-travellers.org> on Tuesday April 03, 2007 @12:46PM (#18589735) Homepage
    Low Slashdot-ID? That whippersnapper? Kids today!

    My experiences with Alphas were universally bad. The Unix they ran was a flaky bitch, and in any given cluster you were guaranteed to have a few of the machines die during a long computation. Then again, they were expensive.

    They were quite zippy though.
  • by nbritton ( 823086 ) on Tuesday April 03, 2007 @01:00PM (#18589977)
    Would it be possible to make a legacy-free x86 chip? I.e., remove from the processor die real mode, unreal mode, VM86, and 16-bit protected mode, as well as all traces of the ISA bus, the BIOS, and anything else you can think of? Porting *NIX and Windows to this new platform architecture would be effortless, and it would not change userland compatibility.

    We don't need to support 30 years of backwards compatibility!
  • by Animats ( 122034 ) on Tuesday April 03, 2007 @01:12PM (#18590141) Homepage

    The x86 instruction set is a surprisingly good way to build a computer. The reasons aren't obvious.

    First, the original x86 was a huge pain, with that stupid segmented memory arrangement. But IA-32 was better and cleaner; at last there was a flat 32-bit address space. (Yes, there's a segmented 48-bit mode, and Linux even supports it, but at least apps see a flat address space.) AMD-64 is even more regular; the segmented memory stuff is completely gone in 64 bit mode. So there is progress.

    RISC architectures could yield simple machines that could execute one simple fixed-width instruction per clock cycle. The early DEC Alphas, the MIPS machines, and early IBM Power chips are examples of straightforward RISC machines. This looked like a big win. The ALU was simple, design teams were small (one midrange MIPS CPU was designed by about six people), and debugging wasn't hard. RISC looked like the future around 1990.

    What really changed everything was advanced superscalar architecture. The Pentium Pro, which could execute significantly more than one instruction per clock, changed everything. The complexity was appallingly high, far beyond that of supercomputers. The design teams required were huge; Intel peaked somewhere around 3000 people on that project. But it worked. All the clever stuff, like the "retirement unit" actually worked. Even the horrible cases, like code that stored into instructions just ahead of execution, worked. It was possible to beat the RISC machines without changing the software.

    The Pentium Pro was a bit ahead of the available fab technology. It required a multi-chip module, and was expensive to make. But soon fab caught up with architecture, and the result was the Pentium II and III, which delivered this technology to the masses. Then AMD figured out how to do superscalar x86, too, using different approaches than Intel had taken.

    The RISC CPUs went superscalar too. But they lost simplicity when they did. One of the big RISC ideas was to have many, many programmer-visible registers and do as much as possible register-to-register. But superscalar technology used register renaming, where the CPU has more internal registers than the programmer sees. The effect is that references to locations near the top of the stack are as efficient as register references. Once the CPU has that capability, all those programmer-visible registers don't help performance.
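
    A minimal sketch in C of the renaming idea (invented for illustration, not any shipping CPU's scheme): every write to an architectural register gets a fresh physical register, which is what dissolves false dependencies between independent uses of "the same" register.

        /* Toy register renamer: 8 architectural registers backed by a
         * larger physical register file. */
        #include <stdio.h>

        #define ARCH_REGS 8
        #define PHYS_REGS 64

        static int rename_map[ARCH_REGS]; /* arch reg -> current phys reg */
        static int next_phys;

        static int rename_dst(int arch) /* a write allocates a new phys reg */
        {
            rename_map[arch] = next_phys++ % PHYS_REGS;
            return rename_map[arch];
        }

        static int rename_src(int arch) /* a read uses the current mapping */
        {
            return rename_map[arch];
        }

        int main(void)
        {
            /* Two back-to-back writes to arch r0 land in distinct physical
             * registers, so they no longer serialize on each other. */
            int p1 = rename_dst(0);
            int p2 = rename_dst(0);
            printf("arch r0 -> p%d, then p%d; reads now see p%d\n",
                   p1, p2, rename_src(0));
            return 0;
        }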

    Making all the instructions the same size, as in most RISC machines, leads to code bloat. Look at RISC code in hex, and you'll see that the middle third of most instructions is zero. Not only does this eat up RAM, it eats up memory and cache bandwidth, which is today's scarce resource. Fixed size instructions simplify instruction decode, but that doesn't really affect performance all that much. So x86, which is a rather compact code representation, actually turns out to be useful.
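
    As one concrete data point for the density claim, compare a common stack-adjust operation in the two styles; the byte values are standard MIPS-I (big-endian byte order) and IA-32, checkable against the architecture manuals.

        /* Same operation, two encodings: MIPS spends a fixed 4-byte word,
         * x86 three bytes.  Scale that over a whole binary and the cache
         * footprint difference adds up. */
        #include <stdio.h>

        /* addiu $sp,$sp,-16  ->  word 0x27BDFFF0 */
        static const unsigned char mips_adjust_sp[] = {0x27, 0xBD, 0xFF, 0xF0};
        /* sub esp,16         ->  83 EC 10 */
        static const unsigned char x86_adjust_esp[] = {0x83, 0xEC, 0x10};

        int main(void)
        {
            printf("MIPS: %zu bytes, x86: %zu bytes\n",
                   sizeof mips_adjust_sp, sizeof x86_adjust_esp);
            return 0;
        }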

  • by Anonymous Coward on Tuesday April 03, 2007 @01:19PM (#18590279)
    Seriously, you must be new here, for the "I'll get moderated down for this but..." trick is one of the PRIME Karma Whoring mantras. Simply by inserting it into your statement, you can not only be granted immunity to downmodding by fanboys, but you just might get some positive modding by the choir to whom you are a-preachin'. Sadly, it is also needed in this day and age of the rabid fanboy, as a sort of garlic/crucifix/holy-water shield against said fanboys, simply to keep your karma at a decent level. I know; I post this AC since my karma has been terrible for over 6 years now because of an incident with some Mac fanboys. I've never been modded badly since, and the good mods I did get have never restored my karma to a positive level. Had I simply added "The Mac asshats will probably mod me down for this but..." I would probably have the perfect karma that I had so long ago. So that's my pitiful story... mod me down if you must ;)
  • Re:lock in (Score:3, Insightful)

    by Dogtanian ( 588974 ) on Tuesday April 03, 2007 @02:10PM (#18591115) Homepage

    The diversification in keyboard layouts is something that shouldn't have happened ever. My home workstation is US (qwerty), my home laptop is BE (azerty) and my work laptop is SF (qwertz).
    Then why don't you just decide which one you prefer and settle on that? You do realise that the legends printed on the keys have no bearing on the operating system, and that you can choose whichever one suits you.... right?

    Of course, this is much better if you can touch type, but even if you can't, this still seems preferable to your current situation.
  • by ncohafmuta ( 577957 ) on Tuesday April 03, 2007 @02:49PM (#18591815)
    What MS needs to do is, on Vista install, give users a choice of theme: the normal Vista one, or the XP one. Letting them start with the XP one will give them time to get used to the new features from within a familiar interface. Then, when they're ready, they can switch to the default theme.
    Sure, you can throw them in the deep end and hope they swim, but given the odds that they might drown and become anti-MS advocates for the rest of their lives, it's a big gamble when you can just ease them in right from the start.
    I'm not saying there isn't transition help out there - tutorials (online and built-in), books - but average users don't want to go through all that (even though they should).

    -Tony
