Intel Details Cascade Lake, Hardware Mitigations for Meltdown, Spectre (extremetech.com) 74
An anonymous reader shares a report: Ever since Meltdown and Spectre were disclosed, Intel's various customers have been asking how long it would take for hardware fixes to these problems to ship. The fixes will deploy with Cascade Lake, Intel's next server platform due later this year, but the company is finally lifting the lid on some of those improvements and security enhancements at Hot Chips this week.
One major concern? Putting back the performance that previous solutions have lost as a result of Meltdown and Spectre. It's hard to quantify exactly what this looks like, because the impact tends to be extremely workload-dependent. But Intel's guidance has been in the 5-10 percent range, depending on workload and platform, and with the understanding that older CPUs were sometimes hit harder than newer ones. Intel wasn't willing to speak to exactly what kind of uplift users should expect, but Lisa Spelman, VP of Intel's Data Center Group, told AnandTech that the new hardware solutions would have an "impact" on the performance hit from mitigation, and that overall performance would improve at the platform level regardless. Variant 1 will still require software-level protections, while Variant 2 (that's the "classic" Spectre attack) will require a mixture of hardware and software protection. Variant 3 (Meltdown) will be blocked in hardware, 3a (discovered by ARM) patched via firmware, with Variant 5 (Foreshadow) also patched in hardware.
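For readers wondering why Variant 1 still needs software-level protection even on new silicon: the vulnerable pattern is an ordinary bounds check that the CPU may speculate past. The sketch below is not from the article; it is a minimal illustration in the spirit of the public Spectre paper, and the array names and sizes are made up for the example.

```c
/* Minimal illustration of the Variant 1 (bounds-check bypass) pattern.
 * Not from the article; array names and sizes are illustrative only. */
#include <stddef.h>
#include <stdint.h>

uint8_t secret_adjacent_memory[64];   /* stands in for data the attacker wants */
uint8_t array1[16];                   /* attacker-reachable array */
uint8_t array2[256 * 512];            /* probe array used as a cache side channel */
size_t  array1_size = 16;

uint8_t victim_function(size_t x)
{
    if (x < array1_size) {            /* the branch the CPU may mispredict */
        /* Under misspeculation with an out-of-bounds x, array1[x] can read
         * secret data, and the dependent load below leaves a cache footprint
         * that survives even after the misspeculated work is squashed. */
        return array2[array1[x] * 512];
    }
    return 0;
}
```

In the published proof-of-concept style, an attacker first calls victim_function() with in-bounds values to train the branch predictor, then with an out-of-bounds index, and finally times accesses to array2 to recover which cache line was touched.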
Hardware Mitigations? (Score:5, Insightful)
Re:Hardware Mitigations? (Score:4, Informative)
Use AMD instead.
Especially since we're mostly talking about servers here. When AMD's EPYC is on 7nm and Intel is still on 14nm++++ or whatever they are calling it, the choice will be a lot easier.
Even Intel's 10nm doesn't appear to be anything like what they had previously told everyone it would be (since they couldn't get it to work).
If they could have pulled off the original 10nm plan, they'd be on a level playing field with the 7nm stuff, but it's looking more and more like Intel will be behind for a while yet.
Re: (Score:2)
Re: (Score:2)
You forgot your sarcasm tag.
Re: (Score:2)
Re: (Score:2)
The few situations where the CPU is the bottleneck and the task can't be parallelized can practically be ignored, IMHO.
For gaming, that is; there is plenty of real work where single-thread performance is important.
Re: (Score:2)
Right, that's why you should be moving to Vulkan/DX12 now, the future of high performance gaming. That's where Ryzen + Radeon/Vega kick Intel's tail. Why stay mired in obsolete game engine technology? You need to be ready for the upcoming wave of high performance FPS and VR games.
Re: (Score:2)
Nope, you'll just install those Meltdown mitigation Windows updates like a good little MCSE. Then you get to explain the 25% performance loss to your boss.
Re: (Score:2)
I hate it when people carelessly claim that ARM is just as vulnerable as AMD and Intel.
None of the ARM CPUs in my tablets and smartphones incorporate speculative execution, and thus they are immune to these attacks.
A few high performance ARM cores have speculative execution and are theoretically vulnerable. However, the vast majority of battery powered ARM devices do not incorporate high performance ARM cores! Battery powered devices are more concerned about conserving energy than raw execution speed, so manufa
Re: (Score:2)
Can I ask what smartphone(s) you have, then? IIRC, OoO execution for smartphones arrived around 2008, and phones without it today are generally called feature phones; I could be wrong, though.
Re: (Score:2)
Really? Then you must not have
Any ARM Cortex after the A53 [wikipedia.org], which was the first with branch prediction, released in 2012 (if you are predicting a branch, then you are executing speculatively). The A57 then bumps this from just branch prediction to full OoO execution.
Any Samsung Exynos after M1 [anandtech.com]. The Qualcomm ones are just warmed over ARMs anyway (with an LTE modem glued on) so nothing new th
Re: (Score:2)
One of the exploits is essentially impossible to protect against without sacrificing performance, one never existed on AMD in the first place (Meltdown), and the rest are generally hard to exploit on Ryzen.
You are of course right, I'll dump my computers as soon as I get this abacus on the Internet...
Re: (Score:2)
Use AMD instead.
Sure that works now with Ryzen, but how well does that work for all the times AMD is out of the running? For much of the past 10 years you were far better off buying Intel and living with the performance hit from the patches.
Re: (Score:2)
That is not actually true for server workloads. The only area where Intel was better was single-core gaming benchmarks.
"OS/VMM" mean "Not Fixed" (Score:5, Informative)
From the slide in the FA, Variant 1 (Bounds-Check Bypass, one of the worst variants), Variant 2 (Branch-Target Injection), and Variant 4 (Speculative-Store Bypass) are all still relying on OS/VMM mitigations --- which means that Intel has done absolutely nothing to try to address them.
Still. Broken.
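For context on what "OS/VMM mitigation" means in practice for Variant 1, here is a hedged sketch (not Intel's or any particular kernel's actual code) of the two common software fixes: a speculation barrier after the bounds check, or a branch-free index clamp in the spirit of the Linux kernel's array_index_nospec(). It reuses the illustrative array names from the gadget sketch above.

```c
/* Hedged sketch of common Variant 1 software mitigations; the arrays are the
 * same illustrative ones used in the gadget sketch earlier in this thread. */
#include <stddef.h>
#include <stdint.h>
#if defined(__x86_64__) || defined(__i386__)
#include <emmintrin.h>                    /* _mm_lfence() */
#endif

extern uint8_t array1[16];
extern uint8_t array2[256 * 512];
extern size_t  array1_size;

/* Option 1: serialize execution so loads cannot run ahead of the bounds check. */
uint8_t read_with_fence(size_t x)
{
    if (x < array1_size) {
#if defined(__x86_64__) || defined(__i386__)
        _mm_lfence();
#endif
        return array2[array1[x] * 512];
    }
    return 0;
}

/* Option 2: clamp the index with a data-dependent mask, in the spirit of
 * array_index_nospec(); real kernels use inline asm so the compiler cannot
 * turn the mask back into a (predictable) branch. */
uint8_t read_with_mask(size_t x)
{
    size_t mask = (size_t)0 - (size_t)(x < array1_size); /* all-ones if in bounds */
    size_t safe = x & mask;               /* out-of-bounds index collapses to 0 */

    if (x < array1_size)
        return array2[array1[safe] * 512];
    return 0;
}
```

Both approaches cost performance on every guarded access, which is why leaving Variant 1 to software means some of that 5-10 percent overhead will likely remain even on the new parts.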
Re: (Score:2)
They probably won't until the next major architecture revision. Aside from anything else, new flaws keep being found, and if they try to patch the current architecture they probably won't get them all and, being incompetent, will probably create more.
Re: (Score:2)
Aside from anything else, new flaws keep being found, and if they try to patch the current architecture they probably won't get them all and, being incompetent, will probably create more.
They can't just "patch" them, they have to make actual architectural changes so that things happen in the correct order. If they could just issue a patch, they could have fixed these problems in microcode already, and declared victory over vulnerability.
Re: (Score:1)
These are kludges, not fixes. (Score:4, Informative)
Real fixes require a new security-first attitude at Intel, and a complete chip redesign based on that attitude.
That will take many years to materialize. In the meantime, expect more vulnerabilities to pop up (some already have) and more ad hoc fixes.
No patches (Score:1)
No patches for me. The whole unit is flawed. Just rip the damn thing out.
Major concern (Score:5, Insightful)
One major concern? Putting back the performance that previous solutions have lost as a result of Meltdown and Spectre.
It's like getting back the "A" grade you lost after they found out you've been cheating. Sure it's a major concern because now you'll actually have to work for your grade. Meanwhile, there are other students who didn't cheat in the first place. Guess which one I'm going to hire?
Re: (Score:2)
Guess which one I'm going to hire?
The cheater, obviously. They have shown they can get to the top with far less effort. Provided they prove their ability to clean up the sewer they shat in, why hire the inefficient one?
Re: (Score:1)
Re: (Score:2)
Hire a cheater and he will find the most efficient way of taking your money. Spoiler alert: it doesn't involve doing the work you are paying him for.
Re: (Score:2)
Hire a cheater and he will find the most efficient way of taking your money.
Well, that is the goal of any employee. It sounds like your remuneration system does not favour outcomes but rather attendance. If you favour outcomes through remuneration, the most efficient way of taking money is the most efficient way of achieving outcomes.
Is this a good time to buy Intel stock? (Score:2)
So, has the expected surge in demand been factored into the price of the stock, or is now a good time to buy?
Conversely, there will soon be a bunch of Intel-based servers flooding the surplus market, right about the time I'll be looking to upgrade my desktop box. Can I pop a graphics card into one of these servers and
Re: (Score:2)
Intel flew too close to the Sun and got burned, much more so than AMD, which has not complicated its design as aggressively. Now Intel is patching instead of working toward a systemic solution. It seems like denial and doesn't encourage a lot of confidence.
There is definitely a partial shift toward AMD underway. Even Intel has publicly predicted a larger move than has yet materialized, and they would be many times more likely to understate that prediction than to overstate it.
It takes a while for momentum to
Re: (Score:2)
Re: (Score:2)
Bug by bug patches? (Score:4, Interesting)
This seems like an effort to stick a bunch of fingers in holes in a dam when the dam has a systemic design flaw. What are the chances that other problems will be discovered after tape-out of the new processors?
These bugs are an indictment of the complexity of the speedup techniques Intel has used. With complexity comes extra design expense, reductions in yield, reductions in reliability, and now, security issues that were not very foreseeable.
Adding more complexity in the form of changes to address all these little problems does not give comfort that the syndrome is fixed.
This was serious enough to warrant going back to the drawing board and designing in changes that eliminate this class of problems, not the individual problems that we know of. This is a disappointing effort.
Re: (Score:3)
If Intel completely reset the processor to its previous state after a failed speculative execution, then the processor would be immune to all speculative execution attacks--even speculative execution attacks that haven't been discovered yet. But it doesn't appear that that's what Intel is doing. Instead, their strategy appears to be to design a separate patch for each different speculative execution attack after that attack is discovered.
What did you expect? It takes about four years to engineer a new microprocessor architecture, more if you are starting from scratch. I am 99% certain that right now there is a team of Intel microarchitects designing such a microprocessor from scratch, and we may well see the fruits of their labour in something like six years (maybe more, if there are more slowdowns on the fab side of things). But in the meantime, Intel needs something to sell; otherwise, the company would go bankrupt before that new micropro
Re: (Score:1)
[--] there are many caveats beyond AMD's control that make it an uphill battle to use their chips.
I barely know anything about the server space, so I'd be interested in hearing what kind of issues there are. Could you shed some light on this? (I tried to take special care not to sound like I'm asking because I doubt your statement; I'm genuinely just interested to learn about this.)
Re: (Score:2)
Well, I doubt his statement. Ever since AMD started selling their own chipsets, I have found zero drawback to using AMD other than reduced performance — but given the cost:performance ratio comparison between the two, unless you are chasing the absolute maximum frame rate in gaming or similar, there are no real drawbacks. Unless you're at the top of the range, you can solve that problem with money, and usually still come out ahead of Intel by spending less of it.
Back when you had to use a VIA chipset
Re: (Score:2)
In my original post I wrote:
"I favour AMD for servers (since this article is about servers) but there are many caveats beyond AMD's control that make it an uphill battle to use their chips."
the word "servers" is written twice.
The article is about a server chip.
I work with blade servers in telco. Also 4U ones with lots of 2.5" SSDs. Each of those servers costs from $20,000 upwards.
If you want to discuss $200 mobos for gaming, that's OK. I have been using and recommending AMD chips since the 486, K6-3, and now Zen. S
Re: (Score:2)
But that was/is for personal use. For server use, it is a different story.
It doesn't matter, even slightly. Once upon a time you didn't have the option to use AMD chipsets for servers or desktops, because they didn't exist. That was a good reason to use intel. Now, for both servers and desktops, you can use AMD chipsets. That reason went away. HTH, HAND!
Re: (Score:2)
I am preparing a blog post (+ LinkedIn series) about it. But since this is Slashdot, and not something I write for CIO-level people, and since you asked nicely (and are not an Anon Coward), I'll give you the TL;DR preview: I'll simply list the caveats, without much explaining. Please remember we are talking about SERVERS. For companies. Not desktops, not laptops, not gaming rigs, not servers for a home lab. Some people seem to forget that...
0.) If your workload needs anything Intel-proprietary, like transacti
Re: (Score:1)
Thanks a lot for taking the time to write the reply; this was very informative! It seems to apply especially to data-center-class servers, and not necessarily affect smaller-scale servers as often (although in some cases it may).
So there is indeed some truth behind Intel's claims about their ecosystem, but it is at least not completely about compatibility (though to some extent also that, such as transactional memory, VMMs, etc.); it is also about stuff being optimized for Intel, and also the suppl
Re: (Score:2)
I worked with a lot of Finnish guys (Nokia, technomen) at the start of my career, and even went to Finland for a job interview in Jan 2001, but alas, it was not meant to be.
What you say about performance/watt in ultra-large-scale deployments (like Amazon, Azure, Facebook, Google) is 100% true, and well known, but my deployments are merely large scale (telcos). And also, if I took it to that level, the point would be lost on people who are still thinking at the level of a gaming desktop, or a tower server for a 5 peo
Re: (Score:1)
I worked with a lot of Finnish guys (Nokia, technomen) at the start of my career, and even went to Finland for a job interview in Jan 2001, but alas, it was not meant to be.
Interesting! I also worked at Nokia with some Spanish people, though they were in a different team and abroad, so I think I only had a chance to see them once; it was around 2008 or 2009, maybe :)
What you say about performance/watt in ultra-large-scale deployments (like Amazon, Azure, Facebook, Google) is 100% true, and well known, but my deployments are merely large scale (telcos). And also, if I took it to that level, the point would be lost on people who are still thinking at the level of a gaming desktop, or a tower server for a 5 peo
Okay, I didn't really know what your target audience is.
As for security, AMD (and all the others) have similar issues with speculative execution and side-channel attacks; some have more issues, some have fewer. It is just that Intel gets the bulk of the publicity for the time being, due to their large size.
You're right, however, and the nature of speculative execution is such that it is quite difficult to get performance from it without sacrificing some security. I saw an idea about resetting the state in case of a failed speculation, but that is quite di
Re: (Score:2)
What are the chances that other problems will be discovered after tape-out of the new processors?
100%. Across the entire industry, the average CPU product line accumulates hundreds of errata (hardware bugs) over its life. It just happened that these specific bugs were security-related.
Wouldn't it be interesting ... (Score:2)
... if they submitted samples of the CPUs to researchers to find these kinds of flaws BEFORE committing to making the first 100 million of them?
Re: (Score:2)
It seems like many chips share similarities, but some are affected more than others. What's more interesting is that none of this is really being exploited even months after disclosure. Obviously it's a good idea to try to mitigate what has been found to be a weakness, but it's hardly a huge threat to anyone yet. You can buy into ARM or AMD and feel secure now, but what about down the road? I don't see any real fixes on the hardware side, only OS, microcode and BIOS updates as we move forward for at least the next few years.
My understanding is that most of the Intel flaws require physical access to the system. Plus, BIOS and OS updates have been quickly developed to mitigate some of the issues. I'm not sure if this is the reason why there haven't been more exploits or if it's because of some other factor.
Re: (Score:2)
My understanding is that most of the Intel flaws require physical access to the system.
Unclear what you mean here; obviously you don't have to poke your finger into the processor to exploit it. There is a certain difficulty in exploiting these flaws, true, which is why we haven't seen much in-the-wild activity on them. But a lesson was learned from the Rowhammer exploit: 'they' said it was too difficult to locate an appropriate bit in memory and know when to flip it, until someone demonstrated a real attack using it. The Spectre family is much the same; it requires a deeper level of knowledge than t
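A practical aside on the OS-level mitigations discussed above: on Linux you can ask the kernel directly which of these flaws it considers mitigated on your hardware. This is a minimal sketch, assuming a kernel new enough (roughly 4.15 and later, newer still for L1TF) to expose the sysfs vulnerability files.

```c
/* Minimal sketch: print the kernel's report of speculative-execution flaw status.
 * Assumes a Linux kernel that exposes /sys/devices/system/cpu/vulnerabilities/. */
#include <stdio.h>

int main(void)
{
    const char *flaws[] = { "meltdown", "spectre_v1", "spectre_v2",
                            "spec_store_bypass", "l1tf" };
    char path[128], line[256];

    for (size_t i = 0; i < sizeof flaws / sizeof flaws[0]; i++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/vulnerabilities/%s", flaws[i]);
        FILE *f = fopen(path, "r");
        if (!f) {
            printf("%-18s (not reported by this kernel)\n", flaws[i]);
            continue;
        }
        if (fgets(line, sizeof line, f))
            printf("%-18s %s", flaws[i], line);   /* line already ends in '\n' */
        fclose(f);
    }
    return 0;
}
```

The same files report "Mitigation: ...", "Vulnerable", or "Not affected", which is a quick way to see whether a given box is running with the software mitigations this thread is arguing about.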