Google's Project Zero Team Discovered Critical CPU Flaw Last Year (techcrunch.com) 124
An anonymous reader quotes a report from TechCrunch: In a blog post published minutes ago, Google's Security team announced what they have done to protect Google Cloud customers against the chip vulnerability announced earlier today. They also indicated their Project Zero team discovered this vulnerability last year (although they weren't specific with the timing). The company stated that it informed the chip makers of the issue, which is caused by a process known as "speculative execution." This is an advanced technique that enables the chip to essentially guess what instructions might logically be coming next to speed up execution. Unfortunately, that capability is vulnerable to malicious actors who could access critical information stored in memory, including encryption keys and passwords. According to Google, this affects all chip makers, including those from AMD, ARM and Intel (although AMD has denied they are vulnerable). In a blog post, Intel denied the vulnerability was confined to their chips, as had been reported by some outlets. The Google Security team wrote that they began taking steps to protect Google services from the flaw as soon as they learned about it.
I wonder... (Score:2)
I wonder who else they informed. There is quite a zero-day hole here.
Re: (Score:2)
Meltdown (Intel-only) is just one implementation of Spectre; Spectre is a whole new family of possible attack vectors that affects everyone. Meltdown makes Spectre easy by forcing the processor to do what you need it to via a security exception in order to read the side effects of speculative processing, rather than hoping that it will. But we could see many forms of this in the future.
The concept that any random javascript could contain a new variant of Spectre which can read protected kernel pages on you
Re: (Score:2)
Assuming I'm reading this right, the other forms of Spectre rely on CPU-specific, and possibly even machine-specific, branch prediction patterns. Which means that, in theory, minor changes to the microcode's branch prediction could break any exploits currently in use targeting a system.
Re: (Score:1)
Explain something...
The timing part needed to read the data is "easy" in JavaScript. No problem there.
But how do you get the pointer to the kernel memory in Javascript, without another exploit to break the virtual machine? I'm pretty sure JS doesn't allow stuff like
char *p = (char *)0xdeadbeef;
Re: (Score:2)
Re: (Score:2)
Well, you could load up x86/x64 code via JavaScript typed arrays or blobs.
If you can already do that, an external hacker probably doesn't need to read kernel memory.
Re: (Score:2)
Intel's press release language is interesting. (Score:2, Interesting)
> Recent reports that these exploits are caused by a “bug” or a “flaw” and are unique to Intel products are incorrect.
By joining the two claims with an AND, a casual reader might think these are not bugs or flaws in their processors. A close logical reading reveals that the statement is accurate only because the second half ("unique to Intel products") is the part that is actually incorrect.
Re:Intel's press release language is interesting. (Score:5, Interesting)
Re:Intel's press release language is interesting. (Score:5, Informative)
Re: (Score:2)
Re:Intel's press release language is interesting. (Score:5, Insightful)
None of these persons would be fooled by PR speeches.
But the shareholders might be.
Technical Details (Score:2)
Link to technical details for those that want it: https://security.googleblog.co... [googleblog.com]
Re:Technical Details (Score:5, Informative)
Whoops, wrong link. I meant this one: https://googleprojectzero.blog... [blogspot.co.uk]
Reading the vulnerability... (Score:1)
If you aren't running virtual machines then it isn't an issue.
This is more of a server attack and a web host attack.
Re:Reading the vulnerability... (Score:5, Informative)
This is more of a server attack and a web host attack.
You might want to read this Mozilla blog post.
https://blog.mozilla.org/secur... [mozilla.org]
Nope, no virtual machine needed. (Score:5, Informative)
This is more of a server attack and a web host attack.
No, it's not specific to web servers.
They do use web servers as an example of where the exploit might be applied, but it's not specific.
Basically, this exploit abuses the way speculative execution is done to leak information from kernel space into user space.
(And there were presentations at the CCC of successful abuses done... in JavaScript. In a browser).
For more details :
most modern CISC processors (Intel - except for Atoms and Xeon Phi - and AMD, etc.) are pipelined and do out-of-order execution.
Executing a CISC instruction requires several steps (micro-ops), and for performance reasons, CPUs keep several instructions in flight (once instruction A goes out of step 1 and into step 2, you can already try pushing instruction B into step 1).
To gain even more performance, CPUs try to be clever about this (instruction B actually needs results of instruction A, so it needs to wait. But the next instruction C actually can already be started, it doesn't depend on anything still in the pipeline).
Bordering on crystal ball-clever (the next instruction B is a conditional jump. But it looks like a loop, so there's a high chance that it will jump back and repeat. We might as well start working back on instruction A, in case we are correct about this jump).
That's speculative execution : working in advance on stuff that might not even be needed.
(Sometimes, you end up needing to bail out of your speculation, throw the work away and restart because you got your crystal ball wrong. But it's better than just sitting there waiting).
now about memory :
any modern processor worth its salt has memory protection, meaning it handles access rights : Which process can read-write which virtual addresses ?
Usually, sensitive information in the OS is shielded away from the regular software.
On a modern Linux, you can't crash the whole system by writing junk at the wrong address, like you used to do in the old MS-DOS days.
If your software attempts to read something out of the system, the read attempt will be rejected.
the exploit relies on how these both play together.
It happens that, in the case of Intel's processors (but not AMD's), the step where the memory page is loaded from the DRAM stick into the cache happens before the check of whether the read is valid.
By the time the Intel CPU does the check, notices that the read is invalid and rejects it, quite a lot has happened.
(Things got loaded into cache, other instructions have started their speculative execution in the pipeline, etc.)
These things are measurable (you can measure the timing of some computation to guess what's in the cache and what's not).
Meaning that it's possible to leak sensitive information that normally pertains to the OS and shouldn't be application-accessible, by doing a ton of such speculative executions and timings.
At CCC there was a presentation of this done in JavaScript: technically, your browser right now could be executing some random javascript shit from some shady website in one of your background tabs and trying to learn as much from your OS as possible.
Such information could further be used while mounting privilege escalations, or other attacks.
In the specific situation of AMD processors, the check is done much earlier (according to their LKML post) and thus not much else has happened yet, and there's not much of a leak from which you could learn.
I have no idea how ARM64 chips are affected. (But it might also be the cache getting populated before the read attempt gets rejected).
Re:Nope, no virtual machine needed. (Score:5, Informative)
Yes. The problem is that if you check for page faults before starting to execute a branch, you must check page faults for all speculated branches; if you check post factum, you only need to do the page fault check on the branch that turns out to be correct, greatly reducing the performance penalty of memory protection checks.
Re: (Score:2)
There's still a lot to be tested.
One thing that's not been tested is the leakiness where you have mixed levels in a process, like hardware acceleration in browsers, or games using the GPU, on the Linux side. DSPs etc. also need testing.
Re: (Score:1)
The important observation is that AMD succumbed to this attack only when user-controlled code could run with kernel privileges, bypassing AMD's early rejection of the loads. Tightening up on kernel exec exploits can fix the problem for AMD and spoils Intel's attempt to spread the blame.
Re: (Score:2)
The problem is, due to the Unix architecture, a lot of the GPU system lives in the kernelspace while still executing userspace code, and a process can thus straddle both.
On Windows, due to the GPU drivers being usermode, that's mitigated somewhat, but still not entirely safe.
GPU: user-supplied code (Score:2)
The problem is, due to the Unix architecture, a lot of the GPU system lives in the kernelspace while still executing userspace code, and a process can thus straddle both.
Yeah, but actually... Nope. Not at all.
The only tiny bit that is running in kernel is the driver that receives the command stream and passes it to the actual physical GFX hardware for rendering.
That's the DRM module, the tiny stuff with ".ko" at the end ("amdgpu.ko", etc.)
Everything else in the rendering stack is handled by libraries (mesa's "libGL.so", "libdrm.so" and its hardware specific variants). All these libraries are in charge of handling all the simpler and nicer language and API that your software
Re: (Score:2)
Yes, the tiny DRM bit, that controls mode setting, memory management via DMA-BUF(which conveniently also allows for CPU access...) and a whole lot of other neat kernelspace stuff.
DMA-BUFs kmap in particular, used together with Spectre, will definitely need to be tested.
Also, with CUDA, some OpenCL implementations, and some Vulkan implementations, you can build Compute Kernels that run both on GPU and CPU.
Couple all those above, with the move towards UMA, and you have some serious testing that needs to be do
Arbitrary code. (Score:2)
Yes, the tiny DRM bit, that {... long list skipped ..}
None of which executes arbitrary code provided by the end-user, which was the entire point of the discussion.
All the long list you give are fixed functions that the DRM performs when called.
When given arbitrary code, none of it gets executed in kernel space. The kernel code only performs the task of pushing that code to the GPU for execution, it doesn't execute anything itself.
(Also, compared to the actual Mesa userland, the DRM bit *is* small, even the parts which are not concerned by the execution of arbi
Re: (Score:2)
Array bound checks (Score:2)
how do you get a pointer to kernel memory in JS without needing to break the virtual machine?
char *p = (char *)0xdeadbeef;
Answering the "how to do random memory access in a language without pointers" part of the question: by abusing arrays and boundary checks (link to a Spectre abuse) [react-etc.net] (Note that it's a Spectre abuse, not a Meltdown abuse. For the meltdown I'll have to track the CCC link).
If a piece of javascript is using the ASM.js specific sub-dialect, it will be JITed to actual x86-64 machine code (in the example each statement of the javascript is translated into one or two machine code instructions).
If you carefully craft
Re: (Score:3)
Intel are in full damage control, but they deserve to lose business after this disaster and ME.
After this disaster and YOU? What did you do?
AMD64: 2 separate things (Score:5, Informative)
Doesn't affect AMD64
The horrible leak that gives access of kernel information to any userspace software that was revealed yesterday doesn't affect AMD64 :
AMD processor reject invalid access much earlier in the pipeline and nothing much happens before that point (e.g.: loading into cache) that could be measured by timing, etc.
In the google paper, they are abusing a different set of anomalies where an application ends up reading its own memory (yay... ). That *could* affect AMD64, but :
- it's only an application in user-space accessing its own user-space memory.
- by enabling a few non-standard kernel settings, you end up with a situation where you can send eBPF (the bytecode used by modern packet filtering) to an in-kernel JIT, and its execution will end up with some in-kernel code reading its own in-kernel memory.
The main big difference, the take-home message:
- on Intel CPU, you have a violation of boundary separation : an end-user application could access information leaking out of the kernel.
- on AMD CPU, this does not happen : you only access information on the same side of the separation boundary.
Or in other words :
- Intel is in deep shit right now. They need a serious workaround. It means that on each context switch - each time software makes a system call (e.g.: to access the file system) - the OS needs to flush out all the sensitive mappings to make them unreachable by the exploit. The end result : massive performance hit.
Re: (Score:2)
Re: (Score:2)
Browsing and games have taken a hit in my use case: Lots of small file accesses, network I/O and many processes active with GPU and other kernel level functions.
Re: (Score:2)
Flushing (Score:2)
The flushing itself is time-consuming.
The thing is, once flushed, there's no address that the exploited user-land software can attempt to read to guess stuff based on timings.
The memory-protection-violating speculative access still could happen, but there's no sensitive address at which you could send it.
Re: (Score:2)
- on Intel CPU, you have a violation of boundary separation : an end-user application could access information leaking out of the kernel.
- on AMD CPU, this does not happen : you only access information on the same side of the separation boundary.
All of the people saying "Ha! Intel only! AMD is better!" are missing the point. The concern isn't only about the specific attacks devised so far. The concern is that we have a whole new class of attack, exploiting a fundamental feature of the architecture of all modern CPUs. Yes, AMD is less vulnerable to the attacks so far devised, but that is an accident. AMD didn't design to protect against this class of attack, because they didn't know about it.
As Bruce Schneier likes to say: Attacks al
Attack class vs. whole design (Score:5, Informative)
The concern is that we have a whole new class of attack, exploiting a fundamental feature of the architecture of all modern CPUs. Yes, AMD is less vulnerable to the attacks so far devised, but that is an accident. AMD didn't design to protect against this class of attack, because they didn't know about it.
Maybe the attacks are a tiny bit interesting because they abuse speculative execution.
(CPU starting to execute stuff before hand, for performance reason).
But there's a big major difference.
In the case of the Meltdown exploit, the one that affects only Intel CPUs, the whole guarantee that memory protection gave doesn't hold anymore.
The MMU is made completely irrelevant.
It entirely breaks whole concepts of computer security.
You might as well go back to MS-DOS era / pre-68030 era, when any piece of code could read/write any arbitrary memory location without any restriction.
It's BIG.
Intel has made a bit of a gamble : for speed purposes, it's a bit faster to postpone the check a little further down the pipeline.
AMD has made a conscious security choice : check rights as soon as possible, because that's the most security-sensible thing to do, even if it means taking a tiny performance hit because you need to make more checks on more potential branches. It's more correct this way.
AMD hasn't specifically planned for the Meltdown exploit ahead of time, but they took the formally correct way to handle security, and it has paid off in the long term (the CPU didn't end up affected once Meltdown was discovered) even if it meant a small performance hit in the short term (they didn't benefit from the tiny performance increase that Intel did).
Again, with the Meltdown exploit, Intel has broken a fundamental tenet of memory protection. (Which just happened not to have been made clearly visible until recently, because nobody thought about this specific timing exploit. But this has been "at risk" since the first Pentiums, whose speculative execution was allowed to go past security checks).
The Spectre exploits, one of which also affects AMD CPUs, are in an entirely different league.
Whereas Meltdown on Intel CPUs goes across limits that should have been held by memory protection,
nothing in Spectre exploit is accessing something that the exploited application didn't have already access to.
It's simply a way for getting around some checks that might be in the way.
i.e.: that application might be making checks to array boundaries, before accessing them.
Due to speculative execution, the check that controls that we're not accessing out of bounds might not have finished yet, while the actual invalid read might already be in the pipeline.
It doesn't give you sudden access to things that you shouldn't have access to. It just gives a way around some types of safety checks that might exist in the code you're trying to abuse.
It's a bit exotic and has some air of novelty, because it uses the speculative execution of modern CPU for a change.
But fundamentally it's a timing side-channel, not much different than other timing attacks done for quite some time (even remotely), hence the big work against data-dependent jumps in cryptography code.
And although it does open a couple of opportunity, the big deal isn't that much in the exploit itself.
Mostly, it's a big slap in the face of all the "rust-trolls" who come trumpeting for array limit checks whenever there's a buffer overflow exploit:
memory access checks don't lift any responsibility for writing stupid code.
Yes, it's too bad that some of the checks can be slightly bypassed, but:
- Why the fuck are you enabling non-standard kernel option to enable user-supplied JITed byte-code in the kernel ? User-supplied stuff in-kernel, what could possibly go wrong ?
- Is keeping sensitive stuff, like the storage of the password manager, and dangerous stuff, like execution of remotely-provided javascript, in the same process a reasonable thing to do ?
The k
Re: (Score:2)
Note that the attack does allow code to spy on memory in the same process on AMD. This seems at first glance to be a non-issue... until you consider that lots of processes -- like your web browser -- run JITed code from untrusted sources. Malicious Javascript able to read any data from the browser process is a big deal. Even if your browser uses a separate process per tab, this means that resources on a page can break the same origin policy. With Chrome you can optionally enable strict site isolation which
Software designed flaw (Score:2)
This seems at first glance to be a non-issue... until you consider that lots of processes -- like your web browser -- run JITed code from untrusted sources.
As I've mentioned at the end of my long rant, the main problem here isn't that this peculiar flaw enables the process to inspect its own data.
The main problem is the actually stupid design of putting sensitive data and remotely-provided arbitrary code in the same security container.
Spectre is just one possible attack in this context. By the end of 2018, there's surely going to be another exploit that could hose the same browser.
The stupid mainly comes from the way browser are designed (or in Google's demo,
Important note (Score:2)
It was well known that on Pentium-line CPUs, a speculative execution branch can access protected memory, but it will just cause a page fault in the end.
The first practical access timing exploit was discovered in 2016. Googlers just found out an even easier way last
Here's what I really want to know... (Score:1)
What about my Commodore 64?
In all seriousness.... (Score:5, Informative)
What about my Commodore 64?
In all seriousness :
- old, in-order, non-pipelined CPUs like the 6502 in your good old trusty C64 don't do speculative execution and thus aren't specifically affected by such exploits.
but:
- your 6502 doesn't do any form of memory protection : any piece of software can access any part of the whole system (because poking weird memory location is how you control the hardware on such old system) so any software has full access to anything.
So your C64 is leaking sensitive information anyway.
(Later Motorola 68k CPUs (68030 and up) eventually started to include a built-in MMU to protect memory accesses, and thus later Amiga machines featuring them (A2500/30, A3000) can be made immune to OS information leaking into userland. That would be the first Commodore hardware - a vaguely remote cousin of your C64 - to do so.)
Yup, I'm giving a technical answer to a joke.
Re: (Score:2)
- old, in-order, non-pipelined CPU like the 6502 in your good old trusted C64 don't do speculative execution and thus aren't affected specifically by such exploits.
If I'm reading this correctly, older Intel Atoms are safe because they are in-order CPUs ( https://spectreattack.com/#faq [spectreattack.com]). I still have an Atom from 2010, and it's already slow enough so I'd rather leave it without KPTI. Of course, my important servers are all AMD.
Intel Atoms (Score:2)
If I'm reading this correctly, older Intel Atoms are safe because they are in-order CPUs ( https://spectreattack.com/#faq [spectreattack.com]). I still have an Atom from 2010, and it's already slow enough so I'd rather leave it without KPTI. Of course, my important servers are all AMD.
...and same for Xeon Phi.
(Which are basically the same kind of in-order approach as Atoms, but linked together with a ginormous SIMD unit - the AVX512 - some kind of ultra-SSE/AVX on steroids that borders on GPU territory. That shouldn't be a surprise, as Xeon Phi is basically what Intel salvaged out of their failed Larrabee GPU experiments).
According to the Wikipedia article [wikipedia.org] about Atom architecture, there's only one single micro-ops ever in flight from a given process (though they DO hyperthreading an
Re: (Score:2)
AMD, ARM mostly immune to the bad stuff (Score:5, Informative)
There are two exploits revealed here: Meltdown and Spectre
Intel, AMD, and some/all ARM chips are vulnerable to at least one of the two Spectre attacks, but patching Spectre will not produce any significant performance reductions.
At this time, only Intel systems have exhibited vulnerability to Meltdown. Patching Meltdown comes with serious consequences.
So AMD is basically correct in stating that they are not in the same position as Intel.
Boundary violations (Score:5, Interesting)
Basically :
AMD checks access rights first and if rejected nothing much happens.
Meaning no leaks of kernel information into user-space software.
- Google only demonstrated user-space software accessing its own user-space info.
- And by using some non-standard settings, it's possible to give bytecode to the kernel, and that piece of in-kernel software will access its own in-kernel info. (But you're already on the other side of the kernel fence.)
Nothing gets across the kernel fence.
Intel checks access rights much later on. By that time quite a lot has happened (e.g.: things could have been loaded in the cache, etc.). By measuring those things, you can deduce information that you should not have access to.
It means that a user-space software could end up getting sensitive information that normally should stay in-kernel.
These subtle cache timings let you get information across the kernel fence into user-land.
To mitigate this, each time user-land software calls into a kernel function (e.g.: filesystem access), the OS needs to unmap kernel space from the process's accessible address space. This comes at a big performance cost.
Re: (Score:2)
I don't see how a different timing would allow you to deduce anything from something that was loaded into the cache. A cache load on modern CPUs consists of loading multiple bytes at a time (8, 16, 32), and a slight timing difference won't tell me the value of those --
just whether something was loaded or not, not any of the individual hundreds of bits.
Now, *if* you could do a second read on the same location *after* it was loaded into the cache (perhaps bypassing the security check as it won't hit DRAM)
Re: (Score:3)
The way this works is that you don't load the protected data into cache, but you use it in a subsequent instruction to load one of the two addresses that you do have access to into cache. I.e., in some pseudo-assembly:
load r1, [protected_addr]
and
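The truncated pseudo-assembly presumably continues along these lines (a hedged sketch of the usual Meltdown-style gadget, not the poster's exact code):

```
load r1, [protected_addr]   ; will fault, but executes speculatively first
shl  r1, 12                 ; scale the secret byte by the page size (4096)
load r2, [probe_base + r1]  ; pulls exactly one probe page into the cache
; later, after the fault is handled: time loads of probe_base + v*4096
; for each v in 0..255 -- the one fast access reveals the value in r1
```

So the protected value itself is never read architecturally; it only selects which of the attacker's own addresses gets cached.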
AMD's newer CPUs are not affected (Score:1)
Google did not test these vulnerabilities on any Zen based CPUs. They tested only on older processors:
"AMD FX(tm)-8320, AMD PRO A8-9600 R7"
https://googleprojectzero.blog... [blogspot.com.es]
AMD: no boundary violations (Score:2)
If you dig into the details :
AMD actually don't violate boundaries.
As in their LKML post, they do the access rights check before anything else, and if the access is rejected nothing much happens that can be timed.
Meaning there's no leaking of kernel information into user-space programs.
The only thing that Google successfully demonstrated is :
- leaking a user-space program's own information (yay!...). There's not much boundary violation here.
- using some non-standard linux kernel settings, to send eBPF (the bytecode
Re: (Score:1)
I think I understand it better now.
There are actually, 3 vulnerabilities: 2 spectre and 1 meltdown.
AMD Zen CPUs are actually affected by the first Spectre vulnerability and they admit to that: https://www.amd.com/en/corpora... [amd.com]
The other Spectre vulnerability and Meltdown don't affect Zen. Meltdown is the vulnerability that needs the KPTI patch. Presumably there is some other patch on the way to fix Spectre.
Details. (Score:2)
Presumably there is some other patch on the way to fix spectre.
And according to the cited article, the mitigation to fix Spectre is much less costly.
Also, Spectre exploits basically only work around things like array-boundary checks.
i.e.: the check that controls that you're not reading out-of-bounds memory might not have finished yet, while the actual invalid read may already have entered the pipeline.
Basically, it's a slap in the face of all the "rust-trolls" who tout array limit checks whenever a buffer overflow exploit is mentioned.
Using bound checking doesn't excus
JIT correctedness (Score:3)
mostly mean that writing a JIT compiler for untrusted code is actually much, much harder than the people writing them thought.
Well, though you'll have to concede that they didn't make any fundamental flaw in the actual JIT implementation - this time.
It's on the much larger scale of a design flaw: putting a JIT running externally supplied arbitrary code in the same context. Something bad is going to happen eventually.
Today it might be more exploiting some weird CPU behaviour,
tomorrow it might be exploiting a straight flaw in the JIT compiler.
Still putting both in the same place was a bad idea. Something was going to happen no matter
Re: (Score:2)
Leaking a user-space program's own information can be a serious risk, especially if that program can also execute arbitrary code. A web browser is an example of such a program. They have done a proof-of-concept where JavaScript running in Chrome can leak information from within Chrome's memory space to a remote attacker. This could include sensitive information such as authentication tokens, private keys, the contents of Chrome's password manager, etc.
The problem: arbitrary code (Score:2)
Leaking a user-space program's own information can be a serious risk, especially if that program can also execute arbitrary code.
Yes, but although the fact that checks (e.g.: array limit checks done in software) don't work perfectly is a problem per se, the fact that YOU ARE RUNNING ARBITRARY CODE in the first place is the main problem here.
In other words, using rust is a nice thing, but it doesn't stop you from writing stupid code in the first place.
(to play with all the usual "rust-troll" that come screaming for out-of-bound checks whenever there's memory overflow exploit mentioned)
A web browser is an example of such a program. They have done a proof-of-concept where JavaScript running in Chrome can leak information from within Chrome's memory space to a remote attacker. This could include sensitive information such as authentication tokens, private keys, the contents of Chrome's password manager, etc.
This is the main reasons why there's been efforts
The Intel Management Engine will save us (Score:2)
The vulnerabilities discovered in the Intel CPUs will never be exploited, as the Intel Management Engine already provides all the necessary backdoors.
Intel production delays (Score:2)
Language used is interesting... (Score:5, Interesting)
(1) There seem to be two [theregister.co.uk] separate [theregister.co.uk] exploits [theregister.co.uk], which you need to dig into the reporting to work out. The Register's coverage is quite good and explains it all. "MELTDOWN" seems to be the more problematic one, and affects Intel and ARM chips. "SPECTRE" seems less problematic and affects AMD chips as well.
(2) AMD affected or not? Google says yeah, AMD says nay. However the wording from the LKML list is that "AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against" [lkml.org]. I think this references that the kernel patch is targeted against MELTDOWN, which does not affect AMD chips (see point 1)
(3) Although everyone's kicking Intel down, the main problem is that no-one can really trust each other now. I know there is a claim of "defective by design", but a lot of things can be described that way if they aren't used in their intended manner. In a "sane" world there would be no malicious actors trying to exploit what seems like quite a clever trick relying on timings (I'm not a chip designer/expert). I read that a lot of issues with the web came about because, when it was designed, everyone on the internet trusted each other, so security against bad apples wasn't designed in. As things have been commercialised you can see the effects, to the point that the only sane way to browse is using ad blockers and NoScript.
My thoughts on people suing Intel are a bit conflicted. Probably based on US law they would lose, but my analogy is like blaming (insert car manufacturer here) for selling you a car which crashes only when someone throws stones at it. We need stronger laws and protections against the rise in hostile actors.
(4) It's interesting that the Google blog post couldn't wait for the embargo-ed deadline of 9th January. They and their customers must have been getting really spooked. I suspect that this was being worked on and known by multiple parties, and a bit of coordination would have been good rather than panic.
(5) It'll be interesting to see what happens with regards to performance - from my understanding, the SPECTRE variants just need code recompilation. Most home workloads should not be affected by the two exploits; however, I think if you are I/O heavy then it may be an issue.
Interesting time indeed.
Re: (Score:2)
(1) There seems to be two separate exploits which you need to dig into the reporting to work.
There are at least three separate exploits so far, so you didn't actually dig.
AMD affected or not? Google says yeah, AMD says nay.
AMD is affected, at least by 2 of the 3 exploits, but mitigation will be cheap because of architectural differences.
Although everyone's kicking Intel down, the main problem is that no-one can really trust each other now.
AMD seems trustworthy.
I know there is a claim of "defective by design", but a lot of things can be described that way if they aren't used in their intended manner.
Intel is bad at branch prediction. Remember the P4? That's why that thing sucked. They're still bad at it, so they took shortcuts, which came back to bite them — and their customers. And here I am just using AMD processors, but that's none of my business.
In a "sane" world there would be no malicious actors trying to exploit what seems like quite a clever trick relying on timings (not a chip designer/expert).
In that case, we don't live in a sane world.
Re: (Score:3)
And based on UK law they will probably win, because they actually have a standard of fitness for purpose. If you bought it on the premise that you would have a certain level of performance and now you won't, you should be able to return it.
Care to lay odds on Intel actually being ruled against? 2:1? 3:1?
Except that guilt is immaterial compared to PR, so this will never get ruled upon; Intel will settle for something that enriches a few lawyers and gives owners a $0.50 discount on the next one.
Re: (Score:2)
Care to lay odds on Intel actually being ruled against? 2:1? 3:1?
No, but I still think they might actually be punished overseas. They are seen as a US company first and an Israeli company second, and everything else somewhere much further down the line. No one outside of one of those two nations will hesitate to insist that Intel actually make good without big, big bags of money.
Re: (Score:2)
Re: (Score:2)
Problem and workarounds (Score:5, Interesting)
There are three different attacks in the blog post [blogspot.com.es] by Google's Security team. The first one, for example, works as follows: it loads from a kernel memory address. This will generate an exception, but before the exception is raised (because the page permission check is delayed to improve performance), the subsequent instructions are executed speculatively. None of those instructions will ever commit, but they can have a noticeable impact on the processor state: they speculatively execute a second load whose address depends on the contents of the value read from kernel space. The load is issued (but not committed), which caches a given memory location. The specific location is based on one bit of the value read from kernel memory.
When the first load is detected to be illegal, the instructions in the pipeline are flushed, but (the following is the critical part) the cached address remains in L1. By timing a memory access to the corresponding address, they can infer one bit of the given kernel memory. By repeating this, they can subsequently infer the whole word, one bit at a time.
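The bit-at-a-time recovery described in the two paragraphs above can be illustrated with a toy simulation. This is not a real exploit — there is no actual speculative execution or cache timing here; the "cache" is just a set recording which probe line a squashed load would have touched, and all names (`transient_access`, `leak_byte`, etc.) are invented for illustration:

```python
# Toy model of the flush+reload bit-recovery channel described above.
# The "cache" is a plain set standing in for the L1 cache; adding an
# element models the side effect of a speculatively issued load that
# survives the pipeline flush.

def transient_access(secret_bit, cache):
    """The transiently executed load: it touches probe line 0 or 1
    depending on one bit of the secret, then is squashed -- but the
    cache side effect remains."""
    cache.add(secret_bit)

def recover_bit(cache):
    """The attacker's timed reload: the cached probe line is 'fast',
    so observing line 1 as cached means the secret bit was 1."""
    return 1 if 1 in cache else 0

def leak_byte(secret_byte):
    """Recover a whole byte one bit at a time, flushing the probe
    'cache' before each attempt, as the comment above describes."""
    recovered = 0
    for i in range(8):
        cache = set()                 # flush the probe array
        bit = (secret_byte >> i) & 1  # what the transient load sees
        transient_access(bit, cache)  # speculative side effect
        recovered |= recover_bit(cache) << i
    return recovered

print(leak_byte(0xA5))  # recovers 165 (0xA5), one bit per iteration
```

In the real attack the "is line 1 cached?" question is answered by timing a reload of the probe array with something like `rdtsc`, which is exactly why the timer-coarsening mitigations discussed elsewhere in this thread matter.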
How can they solve this issue? I can only foresee two alternatives:
- Perform permission checks earlier in the pipeline, but this requires modifying the processor microarchitecture. AMD cores are not affected by this attack, so their uarch probably checks permissions before issuing the load.
- Completely or partially flush the contents of the cache after a processor mis-speculation. This is probably the solution being implemented in the patches being developed.
Note that mis-speculations are VERY frequent, since most of the execution of out-of-order processors is speculative to improve performance. This explains the VERY significant performance penalties caused by the patches.
Re: (Score:3)
No, they're caused by the page-table isolation. Resetting the page tables is a very expensive operation, which is why most OSes mapped kernel memory into the process space. This includes having to flush the page-table caches (known as the Translation Lookaside Buffer, or TLB) far more often.
Re: (Score:2)
As someone else explained more thoroughly, the KPTI patch doesn't make the behavior impossible; it just takes the teeth out of it by unmapping all the juicy things an attacker would want.
Note that partially flushing the cache after a misprediction would certainly mitigate the attack, but there would still be a window between the memory being cached speculatively and the fault being detected. You would instead have to maintain a mask of cached-but-not-yet-valid-for-issue cache lines, and ignore them in specific scenarios until everything has committed.
Re: (Score:2)
Perhaps this entire class of problems can be solved by not providing such accurate timers in the CPU, or by randomizing them somehow, making them useless for measuring these kinds of tiny variations but still useful for measuring things on the order of microseconds.
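That coarsen-and-jitter idea can be sketched in a few lines (a toy model, not how any CPU or browser actually implements it; the 100 µs quantum and the `fuzzed_time` name are made up for illustration — browsers applied a similar idea to `performance.now()`):

```python
import random

QUANTUM = 100e-6  # 100 microseconds: far coarser than a cache hit/miss gap

def fuzzed_time(real_time, rng=random.random):
    """Round a high-resolution timestamp down to a coarse quantum, then
    add random jitter within that quantum. Differences much smaller than
    the quantum (like a ~100 ns cache hit vs. miss) become unmeasurable,
    while microsecond-scale measurements stay roughly usable."""
    base = (real_time // QUANTUM) * QUANTUM
    return base + rng() * QUANTUM

# Two events 100 ns apart -- a plausible cache hit/miss difference --
# land in the same quantum, so their fuzzed difference is pure jitter:
t_hit, t_miss = 1.00001, 1.00001 + 1e-7
print(abs(fuzzed_time(t_miss) - fuzzed_time(t_hit)) < QUANTUM)
```

The trade-off, of course, is that any legitimate code relying on sub-microsecond timing breaks too, and attackers later showed they could build their own coarse clocks (e.g. from a counting worker thread), so this mitigates rather than solves the problem.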
What is the status of AMD (Score:2)
Branch Target Injection: Differences in AMD architecture mean there is a near zero risk of exploitation of this variant. Vulnerability to Variant 2 has not been demonstrated on AMD processors to date.
Near zero implies that it is possible! What are the differences, and why do they make it unlikely? Could enhancements to the attack make it feasible?
Re: (Score:2)
https://arstechnica.com/gadget... [arstechnica.com]
covers the Spectre news, noting that "proof-of-concept attacks [were] successful on AMD, ARM, and Intel systems".
Intel stock sold (Score:4, Informative)
It has also come to light that Intel's CEO sold $24M in stock while he was aware of the issue.
http://www.businessinsider.com... [businessinsider.com]
Re: (Score:2)
Re: (Score:2)
Discovered Last Year (Score:3)
So, five days ago?
Re: (Score:2)
Reports say the info was sent to Intel, AMD, a few others (not all named) last June. So 6 months. Additional info was sent later, but the report didn't say what additional info, or when.
Last year? (Score:2)