Scientists Propose Guaranteed Hypervisor Security
schliz writes "NCSU researchers are attempting to address today's 'blind trust' of virtualization with new security techniques that 'guarantee' malware does not infect hypervisors. Their HyperSafe software uses the write-protect bit on hypervisor hardware, as well as a technique called restricted pointer indexing, which characterizes the normal behavior of the system and prevents any deviation. A proof-of-concept prototype has been tested on BitVisor and Xen, in research that will be presented (PDF) at an IEEE conference today."
Dangerous (Score:5, Insightful)
pdf? (Score:4, Insightful)
Re:pdf? (Score:3, Insightful)
Re:Dangerous (Score:5, Insightful)
"Guaranteed" is a sound mathematical concept that works flawlessly in a mathematically perfect environment.
It's not the algorithm that is usually compromised, it's the implementation. For example, the algorithm relies on strong randomness but none is assured, or the algorithm assumes a medium is read-only while it is merely write-protected in software, and so on.
Re:Dangerous (Score:5, Insightful)
I think you can draw the same analogy here. There's always a way to break any system, no matter how "secure" you make it. The key question is whether the record player actually plays records (is the computer useful for computing?). You could make a perfectly secure computer, so long as you never turn it on. But by the very nature of its running, it's vulnerable to SOMETHING. It's a byproduct of working with a complex system... An application of Gödel's incompleteness theorem proves that in any sufficiently powerful formal system, there's always a question that can break that system (or at least break it with respect to that system). So basically the only secure computer is one that's incapable of actual computation. Once it becomes useful, there will always be a way to break it...
Re:Dangerous (Score:3, Insightful)
One thing that does seem curiously absent is how the NX bit helps you with DMA transfers. OK, granted, you'd need to trick hardware other than the CPU into overwriting it, but given how much buggy hardware there is in this imperfect world (*cough* Broadcom wireless chips *cough*), that isn't going to take all that long.
So you'd need to forbid virtual machines from accessing any non-emulated hardware* (which I'd say is going to cost you in performance), and even then any mistake in the hypervisor's drivers for the real hardware will be fatal (the latest Linux release needed about 6.3 megabytes to describe the driver changes).
* if you allow direct access to any device capable of DMA transfers, that will enable the VM to overwrite any memory it chooses
Although I have some very grave reservations about the idea of "guaranteeing" the security of a hypervisor (or anything else on x86, for that matter), your DMA example is incorrect, assuming you use the latest processors, which have an IOMMU.
The real issue, as the grandparent post points out, is that while you can provide a formal proof of any program, there is no formal proof of the correctness of any AMD or Intel CPU, AFAIK.
Re:Dangerous (Score:4, Insightful)
And I've seen woodworm...
Re:Want guaranteed security? (Score:2, Insightful)
Assumptions... (Score:3, Insightful)
And an even bigger assumption:
How does the über-secure hypervisor itself know that it is running on the real hardware, and not simply stacked upon another layer of abstraction under the control of the malware?
Re:Dangerous (Score:2, Insightful)
An application of Gödel's incompleteness theorem proves that in any sufficiently powerful formal system, there's always a question that can break that system (or at least break it with respect to that system). So basically the only secure computer is one that's incapable of actual computation. Once it becomes useful, there will always be a way to break it...
Bullshit.
Saying a perfect computer can't be secure because one of the things it can compute is how to break its own security is absurd. You can simply define the computer as having limitations on what it can do. To imply that such a computer is useless is to imply that all computers we have today are useless. All existing computers have physical and logical limitations.
Saying that this is then not a "perfect" computer is also bullshit. You can always wrap your output. You can always spit out the doomsday code instead of executing it. You can always escape your special characters.
The computer can still solve any problem you give it. It just won't execute its own automatic suicide code. You can make one that does execute said code but requires the user to confirm. You can make one that executes said code automatically. It all depends on how you want it to behave.
Defining a system that behaves in a certain way, then trying to get it to break that behavior is simply retarded. It's the nerd version of "Can God make a boulder so big he himself couldn't lift it?".
There are zero real-world implications of this "thinking" exercise, regardless of which end you look at it from, any conclusions you draw, etc.