Intel Mulls Cutting Ties To 16 and 32-Bit Support (theregister.com) 239
Intel has proposed a potential simplification of the x86 architecture by creating a new x86S architecture that removes certain old features, such as 16-bit and some elements of 32-bit support. A technical note on Intel's developer blog proposes the change, with a 46-page white paper (PDF) providing more details. The Register reports: The result would be a family of processors which boot straight into x86-64 mode. That would mean bypassing the traditional series of transitions -- 16-bit real mode to 32-bit protected mode to 64-bit long mode; or 16-bit mode straight into 64-bit mode -- that chips are obliged to go through as the system starts up. [...] Some of the changes are quite dramatic, although the impact upon how most people use computers today would probably be invisible -- which is undoubtedly the idea.
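
The mode dance the summary describes can be sketched as a toy Python illustration (the mode names and the helper below are descriptive only, not from Intel's white paper):

```python
# Toy model of the startup paths: a legacy x86 chip walks through three
# modes on boot, while the proposed x86S design would start directly in
# 64-bit long mode. Names are illustrative, not spec terminology.
LEGACY_BOOT = ["16-bit real mode", "32-bit protected mode", "64-bit long mode"]
X86S_BOOT = ["64-bit long mode"]

def transitions(path):
    """Number of mode switches firmware must perform before the OS runs."""
    return len(path) - 1

print(transitions(LEGACY_BOOT))  # 2
print(transitions(X86S_BOOT))    # 0
```

The point of the proposal, as described, is that the left-hand column simply disappears: firmware and bootloaders no longer need the 16-bit and 32-bit stepping stones.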
Cutting their own throat (Score:5, Interesting)
Backwards compatibility is one of the prime reasons why X86 has lasted so long. Even today, a tremendous amount of software is 32 bit. Intel would be eliminating one of the primary reasons people still use their chips.
Re:Cutting their own throat (Score:4, Interesting)
Backwards compatibility is one of the prime reasons why X86 has lasted so long. Even today, a tremendous amount of software is 32 bit. Intel would be eliminating one of the primary reasons people still use their chips.
TFS/A says they'd be creating a new family of 64-bit-only processors, not (necessarily) eliminating others. If so, this might be a good migration path. A more interesting bit from TFA, if this becomes the standard, is:
Apart from eliminating 8086-style 16-bit real mode, and 80286-style 16-bit protected mode, it would also remove 32-bit ring zero, and completely remove protection rings one and two from the architecture.
Just in case the distinction between the x86 protection rings has temporarily slipped your mind, we explained them and how they work in part one of our brief history of virtualization in 2011. Quickly though: ring zero is where an OS typically lives, and ring three is where apps run.
Re: (Score:3)
Doesn't that mean that 32-bit software would run fine, it's just that 32-bit OS would not?
Re: (Score:2)
Seems like a good middle ground to start the transition.
Windows has a pretty functional AOT+JIT solution for x86-32/64 -> aarch32/64 plus a library compat layer (I've played around with it on my Mac), so at the point where Intel and/or AMD (though I suspect both) decide to finally drop ia32 entirely, the only real concern will be the Linux peeps. I don't think most of the Windows peeps will care about the 5% slowdown on the translat
Re: (Score:3)
Re: (Score:2)
This change could speed up boot times and lower chip prices.
Faster and cheaper? Not sure I like where this is going [wikipedia.org] ... :-)
Re: (Score:2)
Noooo how am I going to run FreeDOS now?!?
I do wonder how much it will affect chip prices though. I don't think there is much silicon dedicated to those old features anymore, they are mostly just some extra microcode with no physical implementation at all. Performance of them doesn't matter because hardly anything uses them, and stuff that does is designed for much slower CPUs anyway.
Re: (Score:3, Interesting)
Remember that they already tried to do this back in 2006, and it led directly to AMD eating their lunch.
Re: (Score:2)
If you're referring to Itanium, the situation isn't all that similar: x86-64 has been the main PC instruction set for many years now, so 16-bit and 32-bit support is only necessary for legacy applications, which means that a performance hit from emulation will be far less of a problem, as users will likely have updated performance critical applications to 64-bit long ago. Translating x86 to x86-64 will be easier than translating it to a very different instruction set. Plus emulation technology in general is
Re: (Score:2)
The reason why AMD's 64-bit instruction set won out over Intel's though was specifically because it maintained compatibility whilst Intel's did not. And eventually Intel had to give up and switch to AMD's instruction set because people don't like when things break.
Yes, it's just "legacy software" that you think can be easily written off until you run across a legacy app that you need to use. Or you discover that some call in an app that you use relies on some specific function that got removed as part of
Re: Cutting their own throat (Score:2)
Plus emulation technology in general is more advanced these days.
Compare HAXM to QEMU and tell me you still think pure emulation is good enough and doesn't completely suck performance-wise. There's a reason why VirtualBox and VMware can run x86 on x86 at 99.9% native speed, but PC emulators for ARM-based Android devices are basically nonexistent. Emulating an x86 PC on an ARM Android device would be about as performant as emulating a PC-XT on an Amiga (entirely via software) was. I.e., "not even remotely close to being good enough".
Back in 1986, a 7.15MHz Amiga 1000 coul
Re: (Score:2)
QEMU's TCG engine.... frankly sucks big ass.
It's not entirely fair, because it has to worry about things that unprivileged translators don't, but even running in user mode, it's just not very good.
Re: (Score:2)
Back in 1986, a 7.15MHz Amiga 1000 could emulate a black & white Mac+ reasonably well, because most of the emulation involved native execution with occasional traps for the emulator to intervene.
An Amiga emulating a Macintosh was faster than a Macintosh, because Apple created a graphics-only computer with no graphics acceleration. Apple used off the shelf hardware and it showed. But you did have to have enough spare RAM to hold the ROM AND run AmigaOS AND run MacOS...
Re: (Score:2)
I've heard it was called x86-64 initially, long before release. Operating system developers were given documents in which that term was used, and therefore it stuck in those circles.
Then AMD marketing decided on calling it "AMD64" closer to the actual release.
Re: (Score:2)
The initial specification was for x86-64 [archive.org]
AMD64 was a marketing term, not a technical one.
Makes sense as marketing, and it's fucking nonsensical as a technical term.
What's the implication, that it's the 64-bit version of the AMD instruction set architecture? Well no, that wouldn't make a hell of a lot of sense, now would it?
Re: (Score:2)
Your misinformation and bullshit is a fucking cancer, here.
Re:Cutting their own throat (Score:5, Informative)
This plan doesn't mean that the bits of 32-bit support that 64-bit OSs use to run 32-bit software are being removed, just the bits of 32 bit support needed to boot a 32 bit OS.
Re: (Score:2)
Backwards compatibility is one of the prime reasons why X86 has lasted so long. Even today, a tremendous amount of software is 32 bit. Intel would be eliminating one of the primary reasons people still use their chips.
Apple completely dropped 32-bit support in 2019 and people got over it. You might be thinking 'well Apple has a small market share anyway', but they did the same thing on iOS in 2017 (the 'appocalypse') and there wasn't a mass exodus to Android. I think if Intel did this, developers would pretty quickly follow and they'd end up with an advantage over AMD a few years down the track.
Re: (Score:2)
Little known factoid: You could still ask Mach for a 32-bit code segment, and it would give it to you.
Of course, macOS didn't ship with 32-bit libs, so what you could do with that 32-bit segment was limited. But it was used (for things like Crossover).
Re: (Score:2)
Binary compatibility doesn't matter as much as it used to. Way back when software cost a small fortune, you invested for the long haul. If you needed to switch from PC to Amiga, that was going to cost you big. These days we've grown accustomed to timely and free patches to keep software in line with the whims of the OS, not to mention cross platform licensing. I'm no fan of the cloud or subscription services but the benefit tends to be far more graceful transitions between platforms.
When apple dropped 32bit
Re: (Score:2)
I don't know my emulators very well, but Apple seems able to run x86 on an Arm CPU (albeit a bit slowly) - surely then, Microsoft (or someone) should be able to run legacy, 32-bit compiled code on a 64-bit CPU at least as well as, if not better than, the original 32-bit CPU it was compiled for?
Granted, you won't be able to run a 32 bit OS on a 64 bit CPU, because there's no emulation layer to load the OS into. But that's surely a bit of a niche problem - and solved by VMs if you really need to do it.
Re: (Score:2)
Re: (Score:2)
Re: Cutting their own throat (Score:3)
you don't need an X86 to run Windows any more.
Correction: you don't need x86 to walk windows anymore.
The harsh truth is, yes, you can steroid-up an ARM to run Windows software as performantly as an x86... but in the process, you basically turn it INTO an expensive, exotic de-facto x86. Windows and x86 have co-evolved for decades to optimize each other. ARM hasn't. ARM works well when you have a clean software slate. Running software that's finely tuned for x86, not so much.
Even Linux favors x86. Just compare the performance of Android running natively o
Re: (Score:3)
There have been plenty of non x86 Windows. There was Windows that ran on DEC Alpha chips. There was Windows for Intel Itanium chips. None of them lasted because a lot of what keeps Windows on computers (particularly office computers) is some REALLY old software. Like some businesses are running programs that are decades old because they're niche specific systems that either don't have replacements or the replacements are incredibly expensive.
Re: (Score:2)
Alpha and Itanium both provided emulation for running the old junk, where these architectures failed was the cost. ARM on the other hand is available cheaply.
Re:Cutting their own throat (Score:5, Informative)
Hi. Article author here.
So far, Windows NT ran on, in order:
1. Intel i860 RISC -- its original *native* platform
2. x86-32
3. Alpha/32
4. MIPS
5. PowerPC
6. SPARC - unreleased
7. Alpha/64 - unreleased
8. IA64
9. x86-64
10. Arm
Re: (Score:2)
There is now an ARM based Microsoft Surface laptop running Windows 11... meaning you don't need an X86 to run Windows any more.
Sure, but you need an x86 to run most Windows software. The ARM version is very limited.
Re: (Score:3)
And a Thinkpad, which the same guy who wrote this article -- me -- reviewed on the Reg too:
https://www.theregister.com/20... [theregister.com]
Re: (Score:2)
It only runs software compiled for ARM. So, almost nothing other than Office.
Re: (Score:3)
Arms power the second most powerful supercomputer in the world, one of the most powerful desktop CPUs on the planet (the M1 Ultra), and the most powerful laptop CPUs on the planet (M2 Max)
Asses are for shitting out of, not talking.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Thanks. I can see why that name would have made sense at the time and why they'd want to change it as the company focus changed.
Re:Cutting their own throat (Score:4, Informative)
Hi. Article author here.
That's wrong.
It was originally Acorn RISC Machine.
Then Apple and VLSI invested and it became Advanced RISC Machine.
Then just ARM, then Arm, now arm.
Re: (Score:3)
What processor architecture competes with x86/AMD64 in general and not extremely low power applications?
ARM is for low power. A few desperate attempts to make it an x86 competitor in server space failed miserably, because it's not meant for high performance and so it doesn't scale well in performance. The way to make ARM work as a "sorta kinda" general high performance chip is to constrain software to an extreme degree and then bolt on additional hardware accelerators on top of ARM chip and force everyone t
Re: (Score:2)
What processor architecture competes with x86/AMD64 in general and not extremely low power applications?
Apple’s M-series processors.
Yaz
Re: (Score:2)
I can't help but be amused that Apple's M1 / M2 is somewhat like a modern day 6502 (well 6510) with VIC-II + SID, or perhaps more appropriately a 68000 with Agnus, Denise and Paula
Re: (Score:2)
Arms exist at the very upper echelons of performance in supercomputing, desktops, laptops, and servers.
You have no fucking idea what you're talking about.
Re: (Score:3)
This is the interesting part about a mind of a person with postmodern far left thinking patterns like you. You are utterly incapable of separating "edge case" and "general".
Yes, ARM exists on the edges of all of those things. No, it doesn't exist in those generally. They're overwhelmingly x64.
But your postmodern mind cannot parse this, so it gets a meltdown because ARM can be found in some edge cases, therefore a general statement that it's not competitive with x64 in those markets is untrue.
Do you know wha
Was recently discussed in depth on YT (Score:2)
Re: (Score:2)
Hope it wasn't a YouTube Story!
Backwards compatibility can still exist. (Score:2, Informative)
Re:Backwards compatibility can still exist. (Score:4, Informative)
32-bit programs would still run under a 64-bit OS.
16-bit programs could only run through emulation. Running them under a hypervisor would not be possible.
Re:Backwards compatibility can still exist. (Score:5, Insightful)
16-bit programs could only run through emulation. Running them under a hypervisor would not be possible.
And to be fair, even under emulation they'll still run much faster than their creators ever expected them to run.
Re: Backwards compatibility can still exist. (Score:2)
Re: (Score:2)
Re: (Score:2)
Virtualization requires hardware support for said instructions, so it would not be possible.
Re: (Score:2)
The word you’re looking for is emulation. Which wouldn’t be a problem anyhow because your modern CPU is so much faster than anything designed for a 16 or 32 bit cpu.
Re: (Score:2)
(Insert Subject Here) (Score:2)
Amusing Timing. This came up as an ad on my phone the other day, and I thought, NT 4.0 ? Where am I, 1998? Who the hell is buying these machines for such a stupidly expensive price? - https://nixsys.com/ [nixsys.com]
Seriously. NT 4.0
Re: (Score:2)
Re:(Insert Subject Here) (Score:5, Insightful)
Re: (Insert Subject Here) (Score:2)
And NT4 works as well as it did the first day the microscope was in use. But neither it nor the machine it operates will work well with modern boards, so that's why there is demand for new (or new-old/refurbished) stock from the same era the rest of the equipment is from. And remanufacturing products there is no high demand for in the consumer sector is expensive.
Re: (Score:2)
If the electron microscope is that old, an equivalent replacement is probably going to be more like $100,000. Still cheaper to keep NT4 going, obvs.
Re: (Insert Subject Here) (Score:2)
Replacement boards for industrial machines that rely on proprietary software that was written decades ago and that the vendor stopped supporting. Ditto for the interface cards for those machines, so that's why you see ISA-slot boards advertised on that site.
This is why you have factory floors with machines running Windows 2000 or something about as old in this day and age.
Re: (Insert Subject Here) (Score:2)
"wHy dOnT tHeY rEpLaCe tHoSe oLd mAcHiNeS??1"
- Cost of new equipment
- Cost of tearing down the old machinery and hauling it away
- Cost of shipping in and assembling new equipment
- Downtime
Re: (Insert Subject Here) (Score:3)
Re: (Score:2)
Amusing Timing. This came up as an ad on my phone the other day, and I thought, NT 4.0 ? Where am I, 1998? Who the hell is buying these machines for such a stupidly expensive price? - https://nixsys.com/ [nixsys.com]
Seriously. NT 4.0
Clue: There's millions of expensive industrial machines out there being run by legacy Windows.
(and a lot of cheap ones, too, e.g. in automotive diagnosis...)
Wherefore art thou, El Reg? (Score:2)
Re: (Score:2)
Hi. I wrote the article.
This is a *proposal*. It's not a plan or an announcement. They are thinking about it and this blog post and white paper are for discussion and to gauge reaction.
Yes but... (Score:4, Funny)
Would my Leisure Suit Larry Games on CD still work?
Re: (Score:2)
Re: (Score:2)
Wait... you have a lesbian friend WITH A FLOPPY DRIVE???
Re: (Score:2)
Would my Leisure Suit Larry Games on CD still work?
Most likely not. Even if it is (mostly) a Win32 app, many programs and libraries of that era had snippets of 16-bit code, so no.
Having said that, ScummVM plus the assets on the CD may do the trick.
Re: (Score:2)
Would my Leisure Suit Larry Games on CD still work?
Sure, just run them in DOSBox.
Tell me again (Score:2)
Re: (Score:2)
Because AMD64 crushes ARM at general compute, and most of AMD64 software is coded for that.
Re: (Score:2)
If it's not compatible with every previous X86, then why am I using X86 architecture at all? Microsoft has an ARM64 based Windows 11 Surface laptop now, by the way, although the benchmarks make it look significantly slower than their Intel chip ones.
Because, most likely, you are not running 16-bit 8086 or 286 OSes or software, and most likely you are not running 32-bit x86 OSes either. Nowadays you are using 64-bit x86 OSes and software, plus some 32-bit software, and those will work fine on the new chips.
Re: (Score:2)
If it's not compatible with every previous X86, then why am I using X86 architecture at all? ... benchmarks significantly slower than their Intel chip ones.
You just answered your own question.
Didn't realize they hadn't already! (Score:2)
Especially the 16 bit. Maybe just use an emulator?
Re: (Score:2)
16-bit real mode and v86 mode are hard to use from a 64-bit kernel, as you'd have to do a rather costly dance to bounce out of long mode. So everyone, as far as I know, uses 16-bit emulation if they support real-mode code at all. Usually 16-bit code is still run on non-UEFI platforms to initialize text mode from the VGA BIOS; modern graphics cards are a little weird when it comes to getting the I/O port address space up and the clocks set up for the correct head. And it's a bit of a chicken-and-egg if you put th
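
For the curious, the kind of 16-bit interpretation a 64-bit kernel falls back on can be sketched very roughly in Python. This is a toy fetch/decode loop handling three genuine 8086 opcodes (MOV AX imm16, INC AX, HLT); `run_real_mode` is a made-up name, and a real x86 emulator is enormously more involved:

```python
# Toy sketch of real-mode interpretation: fetch bytes at the classic
# segment*16 + offset address and dispatch on a handful of 8086 opcodes.
def run_real_mode(mem, cs, ip):
    ax = 0
    while True:
        opcode = mem[(cs << 4) + ip]
        if opcode == 0xB8:            # MOV AX, imm16 (little-endian immediate)
            ax = mem[(cs << 4) + ip + 1] | (mem[(cs << 4) + ip + 2] << 8)
            ip += 3
        elif opcode == 0x40:          # INC AX, wrapping at 16 bits
            ax = (ax + 1) & 0xFFFF
            ip += 1
        elif opcode == 0xF4:          # HLT: stop and hand back AX
            return ax
        else:
            raise NotImplementedError(hex(opcode))

# MOV AX, 0x1233; INC AX; HLT -- loaded at the traditional boot address 0x7C00.
code = bytes([0xB8, 0x33, 0x12, 0x40, 0xF4])
mem = bytearray(0x10000)
mem[0x7C00:0x7C00 + len(code)] = code
print(hex(run_real_mode(mem, 0x0000, 0x7C00)))  # 0x1234
```

The point is that none of this needs hardware 16-bit modes at all, which is why dropping them from silicon mostly moves the problem into software.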
2nd Coming of the Co-Processor? (Score:2)
Think about it:
emulation is slow, but you could transfer the instruction stream for 16-bit and/or 32-bit code over a fast interface (PCIe, in any of its incarnations, should provide more than enough bandwidth) to a co-processor that executes it natively in silicon. Perhaps Intel would even "dare" to build a special interface/instruction into its processors for attaching those co-processors and letting them use system RAM.
Intel could beat two problems at once:
a.) mainstream: make a slick, unbloated 64-bit
The Register is a joke site. (Score:2)
It is frequently inaccurate and never corrected.
Please link to factual news sites.
Re: (Score:2)
It makes fun of real news, sure. Particularly when the news is due to some douche-bag's actions. That doesn't make it inaccurate nor incorrect.
Re: (Score:2)
Article author here.
Thanks!
So cutting everything Intel? (Score:2)
That seems to be a good idea, their CPU designs suck. Only keeping AMD64 is clearly the way forward.
Re: (Score:3)
Can't you boot a modern OS but run legacy software, or even an entire legacy OS in a VM?
How much is 16- and 32-bit support really holding chips back anyway?
This all hinges on AMD. If Intel dumps legacy support but AMD doesn't I think it's an easy win for AMD.
Re: at this point (Score:3)
The x86-64 architecture is ridiculously overcomplicated and that absolutely affects performance
That may well be true, but sorry, nobody gets to claim "complexity is killing us!" when they embed whole POSIX OSes into their CPUs at ring -3.
Re: (Score:3)
x86-64 has comparatively few.
Also, microcode is not the RISC/CISC thing some people think it is.
Many RISC processors have used microcode (including the first Arm), and there's a good chance that Arms will one day again, as well... though maybe not also since node process tech is on fucking fire right now, making cramming ever more MOPs into the instructi
Re: at this point (Score:2)
Nonsense, x86 blows ARM out of the water in performance.
The truth is that it's too hard for compilers to generate optimal code (especially since most code is targeting more than a single micro-arch), and it's easier for the actual architecture to do so as you run it.
CISC is good as a virtual instruction set to target.
Re: (Score:2)
The truth is that it's too hard for compilers to generate optimal code
And why is that? It is because an Intel chip compiles machine language into "microcode" rather than executing it. And then it blames the machine language for being not optimal. Oh, and it is also too arrogant to talk to memory, putting sections of it into caches instead, again blaming the software if memory access does not fit that model (matrix calculus, for example). The microcode-compiling is so complicated that the chip executes things that are plainly wrong in the hope that it could be right ("specula
Re: (Score:2, Informative)
"RISC is nothing new, it is how processors should behave and always have before the current PC chip hell. "
More massive ignorance.
RISC was devised as a way to implement custom processors using relatively simple programmable logic. The idea was that most computing used relatively few instructions such that stripping functionality out of the instruction set, if done carefully, would NOT result in much loss of performance (and could be compensated for). RISC was a new way to implement workstations as an alte
Re: at this point (Score:2)
Does it? Why are Apple's M1/M2 processors so damn quick then?
Re: (Score:2)
What a thoroughly ignorant comment. You realize that RISC lost because the assumptions of x86's overcomplicated nature turned out to be untrue, right?
And yeah, all that microcode, and simplifying "the stack", and simplified "pipelines". It's as though you are stuck in 1990.
Re: (Score:2)
Because these are just words.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Weren't there 3?
Don't forget the A20 gate, which cuts off the extra memory in 286 real mode to make it work like an 8086.
Re: (Score:2)
Gate A20 was not a function of the processor; it was implemented (originally) in the keyboard controller. Also, it was not needed to make 286 real mode work like an 8086: it was added to compensate for a bug that Microsoft added to its linker to exploit address wrap-around. As a result of the MS bug, a vast number of applications would not start up without gate A20 despite the processor working properly. Gate A20 was a consequence of Microsoft assuming that 1MB was all the memory there would ever be.
Re: (Score:2)
While virtualization wouldn't be possible, emulation certainly would. Granted, this exacts a much higher toll on performance, but considering the original hardware that most 16-bit software was targeted at, even an emulated x86 16-bit mode on a modern chip would be several orders of magnitude faster than the original.
It doesn't much matter if your Core i9 runs like a Pentium 3 when you're running software originally designed for a 286 . . .
Re: (Score:2)
Re: at this point (Score:2)
Yeah, good luck with that, I don't think they care about backwards compatibility any more.
Re: (Score:2)
It's not that these things have a little 286 core ticking away in the corner; rather, the functionality is integrated into the core of the CPU, and that affects the complexity of the microcode and the design of the pipelines, which need to be able to deal with an inherently alien set of instructions and memory operations within the usual 64-bit pipeline. The benefits are potentially significant.
Re: (Score:3)
It's dark fucking magic, but it works pretty damn well. Crossover does it using Rosetta 2 on the 64-bit only Macs.
I imagine when such a transition were to take place, Windows could use their equivalent witchcraft for their Arm Windows version and make it so that 32-bit apps could still run, albeit with a small per
Re: (Score:3)
Just FWIW... I wrote this article, and I do test this stuff. Yes it still works fine. :-)
Re: (Score:2)
It will trim a bit off the pre-silicon test/verification suite, that saves some development effort. And more importantly trim time off manufacturing diags, which translates more directly into real money per chip.
The trick is can Intel save a penny and still have the same level of sales from customers. I think probably they can.
Re: (Score:2)
Re: (Score:2)
I think the Wintel alliance is not as potent as it used to be. Microsoft is working hard on the ARM datacenter space right now because there's some serious money to be made in software, OS, and CSP. But yes, MS and other vendors would need to be involved in any kind of big shift like this. There's a whole lot of stuff to qualify before a design can even land on the roadmap, long before tapeout. I don't see how a chip could show up in less than 2 years, with 3 years being more realistic.
Re: (Score:2)
Running Windows 12 on a pure 64-bit Intel CPU is definitely feasible provided Microsoft use the x86-32 emulation layer from their Pro X tablet.
(A lot simpler and quicker to emulate a Pentium 3 on x86-64 than on ARM-64, methinks)
Meanwhile, the Qualcomm platform that Windows-on-ARM uses will move to 64 bit only sooner rather than later - ARM's latest Cortex-A715 drops 32 bit.