Intel Is 'Going Big Time Into 14A,' Says CEO Lip-Bu Tan (tomshardware.com)
Intel CEO Lip-Bu Tan says the company is "going big time" into its 14A (1.4nm-class) process, signaling confidence in yields and hinting that it has at least one external foundry customer. Tom's Hardware reports: Intel's 14A is expected to be production-ready in 2027, with early versions of the process design kit (PDK) reaching external customers early this year. To that end, it is good to hear Intel's upbeat comments about 14A. Also, Tan's phrasing 'the customer' could indicate that Intel has at least one external client for 14A, implying that Intel Foundry will produce 14A chips for Intel Products and at least one more buyer.
The 14A production node will introduce Intel's 2nd Generation RibbonFET GAA transistors; a 2nd Generation backside power delivery network (BSPDN) called PowerDirect that connects power directly to the source and drain of transistors, enabling better power delivery (e.g., reducing transient voltage droop or clock stretching) and refined power controls; and Turbo Cells that optimize critical timing paths using high-drive, double-height cells within dense standard cell libraries, boosting speed without major area or power compromises.
Yet, there is another aspect of Intel's 14A manufacturing process that is particularly important for the chipmaker: its usage by external customers. With 18A, the company has not managed to land a single major external client with decent volume demands. While 18A will be used by Intel itself as well as by Microsoft and the U.S. Department of Defense, only Intel will consume significant volumes. For 14A, Intel hopes to land at least one more external customer with substantial volume requirements, as this would help ensure that Intel recoups its investment in developing such an advanced node.
He also said... (Score:4, Interesting)
He also said they OVERdelivered on 18A
Which they had planned to have foundry customers for and don't.
Maybe I won't take him too seriously about 14A either.
Re: (Score:3)
Intel has tried to get into the foundry business twice already, I think. So far they have been far too secretive about their tech stack, making them unworkable for customers. I'm not sure about their ability to change this time either.
However, their ability to get into the foundry business is going to be existential sooner or later. Without the volume, income, and experience a foundry brings, there will be a hard limit on their ability to keep up in the tech race.
Re: (Score:1)
The only way they retain credibility in that business is if they become a pure foundry player, just like TSMC, and stop competing w/ their customers. But not sure what it will take to win back customer trust on that front
Also, is it just me, or does Intel look way behind the curve? TSMC is already at 2nm, and Intel is crowing about 14nm? Maybe Intel would do better making lower end silicon, such as embedded chips
Re: (Score:2)
Intel should do an AMD and spin off the fabs... if they ever get them working properly.
When AMD spun off GloFo they were working well, though becoming dated. Intel needs to get a current process up to that first part...
Re: (Score:1)
Personally I'm guessing it's parts for Nvidia and the architecture of whatever they go in will be only tangentially relevant.
Re:bruh (Score:4, Interesting)
Apple are actually now on their 4th CPU architecture.
Windows on non-x86 is always going to be an uphill battle because of legacy software only available in binary form, and the chicken-and-egg problem which means proprietary vendors will not port software to a new architecture until the user base is there, and the user base won't exist before there is software to run.
But the idea of a general purpose computer is quickly fading back into a niche. The vast majority of people neither want nor need a general purpose computer running a complex and fragile operating system. They want a media consumption device, or a gaming device, etc., so the vast majority of people are better off with game consoles, smart TVs, Chromebooks, tablets and phones. All of these devices already natively run on ARM.
Moving b/w CPU archs (Score:2)
Precisely. First of all, years ago, NEXTSTEP was ported to PA-RISC and SPARC before going to x86, and then, as macOS, also moved to Apple's ARM implementations. So Apple has a great deal of experience in moving between architectures.
Microsoft, otoh, had the same opportunities, and bungled them badly. Back in the day, when aside from x86, they also ran on MIPS and Alphas, they did nothing to make their software run as well on the latter 2. Had they done that, they too could have gotten the same sort o
Re: (Score:1)
Serious question - I have been using Windows 11 for Arm on MacBooks (with Parallels) for 2-3 years now, and my experience has been really great.
What I want to know: do people mean Windows on Arm on Snapdragon is bad? Or maybe they mean gaming, because I don't do that.
Apart from gaming, I run a huge shit ton of every type of software: all sorts of beta versions, cracked versions, everything, and have not found any problems.
Since parallels integrates both desktops if you want OR you can just 4 finger swipe between de
Re:bruh (Score:4, Informative)
I'm the figurehead for a fairly popular open source project, and had a Lenovo notebook with a Snapdragon X (X126100) processor donated to me for working on 64-bit ARM support for it. It came with Windows 11, and Windows runs just fine on it. I've had absolutely no issues. Now, the only things I've installed on it are the development tools (MSYS CLANGARM64 environment, WinDbg, git for Windows, etc.), Firefox, 7zip, KeePass, Notepad++ and PuTTY (unless there are some more things I can't remember). But it has brilliant battery life, decent build quality, and better performance than an Intel 8th generation i7 (Coffee Lake). I've had absolutely no issues with Windows on a Snapdragon X CPU besides the usual annoyances of having to disable/uninstall a pile of crap that's enabled by default.
Now there are some known issues with the MSYS environment on ARM, but that isn't an issue with Windows itself. I expect that will get better as it gets more uptake and they iron out the wrinkles. But it's completely usable, and the only really annoying limitations are to do with running applications under a debugger, so I have to use a somewhat roundabout process to attach with WinDbg or llvm's lldb.
I don't know what's going wrong for the people complaining. Maybe they're dealing with poor ARM ports of applications? Maybe they have external hardware that lacks drivers? Maybe they have games or other software that's only tested with Intel/AMD/NVIDIA GPUs and doesn't like the Qualcomm Adreno? Maybe they're trying to run x86 software and dealing with poor performance under emulation?
I don't know what you plan to do with your ARM PC, so YMMV. But as an open source application developer, Windows 11 on a Qualcomm-based notebook from Lenovo has been perfectly adequate.
Win-Arm on Macbooks (Score:3)
If you are running Windows for Arm on MacBooks under Parallels, it's not the same thing as running that Windows on a Snapdragon, because what you actually have running is Windows for Arm on an M-series chip. Unlike in the case of a Qualcomm or a Microsoft (if one has a Surface), Apple has already taken care of the hardware integration, and Parallels just allocates a subset of that to your VM. It's not like Parallels is doing a QEMU, where it actually has to translate the assembly level code from Arm to x86 or s
Re: (Score:2)
The instruction set is irrelevant, just a tiny fraction of a percent of the footprint, and x86 CPUs decode instructions to RISC-like micro-ops anyway. The reason that Apple was successful with ARM was because they had a really good CPU design, not because of the instruction set that it used.
Re: (Score:2)
The x86 instruction set is encumbered by multiple technical and legal impediments which prevented Apple from making their own version.
Re: (Score:2)
Apple's original model didn't involve making their own version. During the AIM alliance, they were happy to let Motorola and IBM make their CPUs, and they were even happy to let Intel be a sole source when they moved to x86: I don't recall them ever using AMD for CPUs. The problem is that Intel failed to give them something that they could use w/ their iPhones, iPods and iPads, and to make things worse, they weren't even hitting the power limits that Apple needed. Hence, they acquired PA-Semi, repurposed that
Re: (Score:2)
Those are reasons why they didn't design their own CPU with an x86 frontend, not a reason why the frontend has any impact on performance or power efficiency (which it doesn't). Modern CPUs are giant beasts with a paper thin instruction set facade wrapped around them. The instruction set that users see (x86, ARM, etc.) isn't even the native instruction set of the CPUs. It's just what gets sent through the translation layer.
Re:Fabbing for ARM64? (Score:4, Interesting)
This is typical Slashdot "I hate something, I can't explain why except vibes, so I'm going to pretend it's dead" fantasy nonsense.
It's never been the case that ix86 is the only architecture that can do things on the desktop or server. IBM "proved" this in the 1990s with POWER and PowerPC. DEC "proved" this with Alpha. HP with PA-RISC. etc.
However, ix86/amd64 remains the primary platform for desktops and servers. Indeed, it basically ran over all of those platforms in the 1990s and became the standard.
Could ARM supersede it? Why would it? There is no installed base of ARM server software, and the only reason it's kinda viable is that most GNU/Linux stuff is open source. ARM desktop? Forget it. It's not just games; almost all non-Microsoft applications are ix86/amd64, and the source is usually not available.
It's not even as if Intel is the only company developing or pushing ix86/amd64. There's clue right there in the "amd64" bit - that whole thing came from AMD, not Intel.
So what's the reason for everyone suddenly deciding to ditch decades' worth of software and switch to ARM? Is it more performant? Seems maybe about equal right now on a performance-per-watt basis - slightly more efficient at low power, slightly less at scale? Those desktop Apple ARM chips aren't powerful because ARM, they're powerful because the memory is in the same package as the CPU. But equal isn't going to cut it; it has to be significantly faster to move people away from it.
And ARM has significant limitations. It's cache and memory bandwidth unfriendly (which is exactly why Apple had to hook up the memory in such an upgrade-unfriendly manner), it uses much bigger instructions than ix86/amd64, and it needs more of them to express the same algorithms. RISC isn't some panacea; it's a technology that was great in the 1980s, and now ARM has basically had to adopt the same logic that CISC ISA chip makers did in the 1990s to make their own chips run at a decent speed. But less efficiently.
So, no, unless an open architecture non-RISC ISA gets implemented and starts making serious inroads, ix86/amd64 isn't going anywhere right now. And no, it's not because of games.
Re: (Score:2)
Arm had significant limitations in ARMv7. Once they got to 64-bit, they managed to overcome them, and today Apple produces Arm CPUs that are higher in performance but lower in power. Just look at all the reviews of the M-series line of Macs.
RISC has inherent advantages over CISC; that was established way back in the 90s. The only reason x86 won was the petabytes of legacy software it had on Windows, and DOS before that. Otherwise, both Intel and AMD were adopting RISC techniques in the Pentiums and Athlons,
It is FABulous (Score:3)
Naaa (Score:1)
It will not work out. Intel cannot do it anymore.