Scientists Propose Guaranteed Hypervisor Security 104

schliz writes "NCSU researchers are attempting to address today's 'blind trust' of virtualization with new security techniques that 'guarantee' malware does not infect hypervisors. Their HyperSafe software uses the write-protect bit on hypervisor hardware, as well as a technique called restricted pointer indexing, which characterizes the normal behavior of the system and prevents any deviation. A proof-of-concept prototype has been tested on BitVisor and Xen, in research that will be presented (PDF) at an IEEE conference today."
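The "restricted pointer indexing" idea can be sketched in miniature (a generic illustration of the technique as described, not HyperSafe's actual implementation; the table and function names here are invented): indirect-call targets are replaced by indices into a fixed table of legitimate targets, so a corrupted value can no longer redirect control flow to arbitrary code.

```python
# Toy model of restricted pointer indexing (illustrative only).
# On real hardware the target table would sit in write-protected
# memory; here an immutable tuple stands in for it.

def handler_a():
    return "a"

def handler_b():
    return "b"

# Fixed table of legitimate indirect-call targets, built ahead of time.
TARGET_TABLE = (handler_a, handler_b)

def indirect_call(index):
    # Instead of dereferencing an arbitrary function pointer, validate
    # the index against the table bounds before dispatching.
    if not (0 <= index < len(TARGET_TABLE)):
        raise ValueError("control-flow violation: index outside table")
    return TARGET_TABLE[index]()

print(indirect_call(0))  # a legitimate target runs normally
try:
    indirect_call(7)     # an attacker-corrupted index is rejected
except ValueError as err:
    print(err)
```

Even in this toy form the point is visible: the set of reachable targets is fixed at analysis time, so "normal behavior" is enforced rather than inferred.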
  • Dangerous (Score:5, Insightful)

    by Nerdfest ( 867930 ) on Monday May 17, 2010 @08:23AM (#32235640)
    It's very dangerous to say "guaranteed" when it comes to security. It's very rarely true.
    • Re:Dangerous (Score:4, Interesting)

      by fuzzyfuzzyfungus ( 1223518 ) on Monday May 17, 2010 @08:29AM (#32235662) Journal
      Well, to be fair, CS is math, and can involve definite formal proofs. Now, once you compromise on hardware requirements (due to a scarcity of Turing machines, $IDEAL_ALGORITHM has been ported to x86...) or have to produce software at the speed of programming rather than the speed of proof...
      • Re: (Score:3, Interesting)

        One thing that does seem curiously absent is how the NX bit helps you with DMA transfers. Ok, granted, you'd need to trick hardware other than the cpu into overwriting it, but given how much buggy hardware *cough* wireless broadcom chips for example *cough* there is in this imperfect world that isn't going to take all that long.

        So you'd need to forbid virtual machines from accessing any non-emulated hardware* (which I'd say is going to cost you in performance) and even then any mistake in the hypervisor's drivers for the real hardware will be fatal (the latest Linux release needed about 6.3 megabytes to describe the driver changes done)

        * if you allow direct access to any device capable of DMA transfers, that will enable the VM to overwrite any memory it chooses

        • Re: (Score:3, Insightful)

          One thing that does seem curiously absent is how the NX bit helps you with DMA transfers. Ok, granted, you'd need to trick hardware other than the cpu into overwriting it, but given how much buggy hardware *cough* wireless broadcom chips for example *cough* there is in this imperfect world that isn't going to take all that long.

          So you'd need to forbid virtual machines from accessing any non-emulated hardware* (which I'd say is going to cost you in performance) and even then any mistake in the hypervisor's drivers for the real hardware will be fatal (the latest linux release needed about 6.3 megabytes to describe the driver changes done)

          * if you allow direct access to any device capable of DMA transfers, that will enable the VM to overwrite any memory it chooses

          Although I have some very grave reservations about the idea of "guaranteeing" the security of a hypervisor (or anything else on x86, for that matter), your DMA example is incorrect assuming you use the latest processors, which have an IOMMU.

          The real issue, as the grandfather post points out, is that while you can provide a formal proof of a program, there is no formal proof of the correctness of any AMD or Intel CPU, AFAIK.

        • Assumptions... (Score:3, Insightful)

          by DrYak ( 748999 )

          And an even bigger assumption:

          How does the über-secure hypervisor itself know that it is running on the real hardware, and is not simply stacked upon another layer of abstraction in the control of the malware?

          • Run the cracking code in your hypervisor to see if you can break into yourself. If you can, then you are the real hypervisor, because malware would have closed the security hole once it had cracked into you.

          • by fbjon ( 692006 )
            Feel free to bootstrap a system from scratch if you need that level of paranoia. It's perfectly possible to do, and you only need to do it once.
      • by vidnet ( 580068 )

        As Donald Knuth once said, "Beware of bugs in the above code; I have only proved it correct, not tried it."

    • Re:Dangerous (Score:5, Insightful)

      by T Murphy ( 1054674 ) on Monday May 17, 2010 @08:34AM (#32235704) Journal
      Saying "guaranteed" is very dangerous for a corporation that will lose $$$ in sales should it be proven wrong. For researchers who are actually concerned with making something that is guaranteed safe, using the word is great, as it begs people to put them to the test. Better to be proven wrong quickly so they can get back to work than to falsely believe it may truly be safe.
    • Guaranteed security: remove all power supplies, user inputs, and network connections, and melt all hard drives.
      • Guaranteed security: remove all power supplies, user inputs, and network connections, and melt all hard drives.

        You forgot:
        - Kill everyone involved.
        - Burn down all locations where the data was ever present.

        With correct definitions for "involved" and "present", you can guarantee security.

        • by bondsbw ( 888959 )

          With correct definitions for "involved" and "present", you can guarantee security.

          So what you mean is:

          - Kill everyone
          - Burn down all locations

          • Re: (Score:3, Funny)

            by Thanshin ( 1188877 )

            So what you mean is:

            - Kill everyone
            - Burn down all locations

            Guys. I've got someone here who knows about protocol ICU2. There's been a leak. Apply procedure K111 to subject and all related to the sixth degree.

        • Re: (Score:1, Funny)

          by Anonymous Coward

          Everyone knows you have to nuke it from orbit, it's the only way to be sure. And you call yourself a geek.

    • Re:Dangerous (Score:5, Insightful)

      by SharpFang ( 651121 ) on Monday May 17, 2010 @08:37AM (#32235734) Homepage Journal

      "Guaranteed" is a sound mathematical concept that works flawlessly in a mathematically perfect environment.
      It's not the algorithm that is usually compromised, it's the implementation. Say, the algorithm relies on strong randomness but none is assured, or the algorithm assumes a medium to be read-only while it is merely write-protected in software, and so on.
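      The randomness example is easy to make concrete (a generic Python sketch, not tied to any particular cryptosystem): a PRNG seeded with a known value is fully reproducible, so any "secret" derived from it is only as secret as the seed, while a CSPRNG draws from the OS entropy pool.

```python
import random
import secrets

# A seeded PRNG: anyone who knows the seed can regenerate the "key".
rng = random.Random(1234)
key_weak = rng.getrandbits(128)

rng_attacker = random.Random(1234)                # same seed...
assert rng_attacker.getrandbits(128) == key_weak  # ...same "secret"

# A CSPRNG backed by OS entropy has no user-visible seed to replay.
key_strong = secrets.randbits(128)

print(hex(key_weak))
print(hex(key_strong))
```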

      • Like, the algorithm is based on strong randomness and none is assured

        There is no random.

        -The Universe

        • Chaos Theory Claims Otherwise.

          [randomness is not a bit field but a floating point value. No "Random/not Random" just "More random/Less random"]

          • The Universe is quantum.
            All things are deterministic.
            All things appearing to be "random" are simply not yet fully-understood.

            • Prove it.
              • quanta themselves are random.

                Take an atom of uranium. You know the half-life of the element. You know the exact probability the atom will break up in the next second. You have NO way of determining when it breaks up. It can be in a second or in a thousand years.

                Quantize this.
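                  The uranium example can be put in numbers (a back-of-the-envelope sketch; the half-life figure is approximate): the decay probability over any interval is known exactly, yet that tells you nothing about when this particular atom will go.

```python
import math

# P(decay within time t) for a single atom with half-life T:
#   P = 1 - 2**(-t/T)
# expm1 keeps the result accurate when t/T is tiny (plain 1 - 2**-x
# would round to 0.0 in double precision here).
def decay_probability(t, half_life):
    return -math.expm1(-(t / half_life) * math.log(2.0))

# Uranium-238: half-life of roughly 4.468e9 years, in seconds.
U238_HALF_LIFE_S = 4.468e9 * 365.25 * 24 * 3600

p = decay_probability(1.0, U238_HALF_LIFE_S)
print(p)  # ~4.9e-18: an exact probability, but no prediction of the moment
```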

                • All things appearing to be "random" are simply not yet fully-understood.

                  • Still, "random" as in "inherently unpredictable, ever" is not necessary for these algorithms to function correctly; "unpredictable for a person trying to crack it" is perfectly sufficient.

            • by EdIII ( 1114411 )

              All things appearing to be "random" are simply not yet fully-understood.

              Not exactly.

              What Chaos Theory states is that sensitivity to initial conditions in a system (the Butterfly Effect) makes long-term prediction effectively impossible in spite of the deterministic nature of the system.

              That means, even if we fully understood a dynamic system, prediction would still be impossible if our ability to measure the system was not perfect.

              In that context, IMHO, randomness is ultimately an emergent property connected

              • By definition, if our "ability to measure the system" is not perfect, then we do NOT "fully understand" the dynamic system.

                Granted, there is no way for something within a system to fully understand it. Even if presented with the full rules, you couldn't get exact measurements. Even if you got exact, instantaneous measurements and had instantaneous processing, you could never fully erase the effect of that measuring and processing, or the impact of having the information.

                There's a point where better predi

    • I'm still waiting for guaranteed bug-free hardware. I'm afraid there isn't any on the wider market. Long live simple RISC!
    • There is a very clear difference between a technique and implementation. Fortunately for researchers they are only interested in the technique. Most encryption techniques are near flawless, but are ruined by poor or limited implementation by the user. Not to mention there are usually assumptions that are impractical or inconsistent in real world conditions.
    • by mwvdlee ( 775178 )

      Does the license contain the usual "not liable for any damages due to this software not working as promised" clause, or do they REALLY guarantee it?

    • Re:Dangerous (Score:5, Insightful)

      by ircmaxell ( 1117387 ) on Monday May 17, 2010 @09:04AM (#32235960) Homepage
      Reminds me of the story of the Tortoise and the Crab from Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter. The Crab kept buying a "Perfect record player". One that could reproduce any sound possible. The Tortoise kept bringing over records that would induce harmonics and destroy the player. The conclusion drawn by Hofstadter was that if it's perfect, by the very nature of its perfection it can be destroyed by a record. In fact, all record players that reproduce a sound predictably can be destroyed by a record entitled "I Cannot Be Played on Record Player x". So that means that anything useful as a record player is vulnerable.

      I think you can draw the same analogy here. There's always a way to break any system, no matter how "secure" you make it. The key is does the record player actually play records (is the computer useful in computing)? You could make a perfectly secure computer, so long as you never turn it on. But by the very nature that it's running, it's vulnerable to SOMETHING. It's a byproduct of working with a complex system... An application of Gödel's incompleteness theorem proves that in any sufficiently powerful formal system, there's always a question that can break that system (or at least break it with respect to that system). So basically the only secure computer is one that's incapable of actual computation. Once it becomes useful, there will always be a way to break it...
      • Re: (Score:2, Insightful)

        by sexconker ( 1179573 )

        An application of Gödel's incompleteness theorem proves that in any sufficiently powerful formal system, there's always a question that can break that system (or at least break it with respect to that system). So basically the only secure computer is one that's incapable of actual computation. Once it becomes useful, there will always be a way to break it...

        Bullshit.

        Saying a perfect computer can't be secure because one of the things it can compute is how to break its own security is absurd. You can simply define the computer as having limitations as to what it can do. To imply that such a computer is useless is to imply that all computers we have today are useless. All existing computers have physical and logical limitations.

        Saying that this is then not a "perfect" computer is also bullshit. You can always wrap your output. You can always spit out the do

        • The computer can still solve any problem you give it. It just won't execute its own automatic suicide code.

          I'd suggest reading the book. He tackles this problem quite easily. There are an infinite number of possible "suicide codes". And due to the incompleteness theorem (among others), the computer cannot possibly know OR FIGURE OUT if a particular code is bad. Besides, it's impossible for a computer to know 100% of the outcomes without actually executing the code (see: Halting Problem [wikipedia.org]). So no, it c

          • There are an infinite number of strings containing a specific pattern. A computer can't know all strings that contain that pattern, but it can analyze any string to see whether it contains that pattern.

            And yes, a computer CAN evaluate code without executing it. It could just execute it in a VM, simulating itself. Derp!

            It is not impossible to build a secure system. You define secure behavior, and you build a system that implements it. Many digital and real-world systems are secure.

            They are limited in wh

            • by ganhawk ( 703420 )

              And yes, a computer CAN evaluate code without executing it. It could just execute it in a VM, simulating itself. Derp!

              A computer cannot fully simulate itself. A computer with b bits of memory can go through 2^b states, so any machine it simulates must have fewer than 2^b states. If a computer could simulate itself perfectly, the halting problem would be solved.

              • It's solved then.

                It goes like this: you have a physical server, a dual-processor system with 2 GB of RAM. You create a VM with a single proc and 1 GB of RAM. When code needs to be tested, it creates a cloned VM of itself using the additional proc and RAM, runs the code, tests the output, then destroys the VM.

            • So Kurt Gödel, Douglas Hofstadter, and Alan Turing are retarded.
              True, the word "useless" is not entirely correct; "incapable of acting as a Turing machine" or "incapable of performing an arbitrary sequence of calculations with the provided operators" is more correct. You can't use a SQL-injection exploit on an abacus. It just won't work. You also can't serve a web page with an abacus. While some systems can be designed such that they don't need any "advanced" functionality the level at which one encoun
              • So a perfect Turing machine is insecure. And a secure Turing machine is incapable of acting as a Turing machine.

                There's also a difference between "can't tell if a particular piece of code is harmful without executing it" and "can't tell if an arbitrary input of well-formed code is harmful without executing it."

                Actually, no there isn't. Code is input.
                And your example involving anti-virus software makes no sense, has no relevance, and is wrong.

                Anti-virus software works because of this: It contains a (finite) list of (particular) patterns that are known to be bad, selected from the (infinite, arbitrary) set of bad patterns. As long as the finite list is equal to or larger than the finite list of actual viruses in the wild the computer can't get infected.

                Anti-virus software contains a finite list of exact patterns and an infinitely-applicable finite list of heuristics.
                Even if the finite list of rules was larger than the list of actual viruses, you could get infected if the list of rules did not form a superset of
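                The finite-list-plus-heuristics model above can be sketched in a few lines (a deliberately crude illustration; the signatures and the heuristic are invented, and real engines are vastly more sophisticated):

```python
# Toy signature scanner: a finite list of exact byte patterns plus one
# heuristic that applies to arbitrarily many inputs.
SIGNATURES = [b"EVIL_PAYLOAD_V1", b"DROPPER_STUB"]

def heuristic_suspicious(data):
    # Invented heuristic: flag long runs of 0x90 (NOP) bytes, a pattern
    # sometimes associated with shellcode sleds.
    return b"\x90" * 32 in data

def scan(data):
    """True if data matches a known signature or trips the heuristic."""
    return any(sig in data for sig in SIGNATURES) or heuristic_suspicious(data)

print(scan(b"xx EVIL_PAYLOAD_V1 yy"))      # True: exact signature hit
print(scan(b"\x90" * 64))                  # True: heuristic hit
print(scan(b"something genuinely novel"))  # False: outside both lists
```

The last line is the point being argued: anything outside the signature set and the heuristics' reach sails straight through.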

          • by TheLink ( 130905 )
            You don't bother figuring out whether something is malicious or not; that's harder than solving the halting problem (since you do not know the full inputs and full program description).

            What you do: you workaround the halting problem by forcing the program to stop anyway.

            Example:
            1) having the operating system force the program to halt if it's still running after X seconds.
            2) having the program state up front the maximum time "T" it will want to run for, and have the operating system force the program to halt
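            Option (1) above is trivial to demonstrate (a sketch using Python's subprocess timeout as the "operating system"; a real OS would enforce this at the scheduler level):

```python
import subprocess
import sys

# Rather than deciding whether a program halts (undecidable in
# general), bound its running time and kill it when the budget expires.
def run_with_deadline(cmd, seconds):
    try:
        completed = subprocess.run(cmd, timeout=seconds)
        return ("finished", completed.returncode)
    except subprocess.TimeoutExpired:
        return ("killed", None)

# A program that would loop forever is forcibly stopped after 1 second.
looper = [sys.executable, "-c", "while True: pass"]
print(run_with_deadline(looper, 1))                          # ('killed', None)
print(run_with_deadline([sys.executable, "-c", "pass"], 5))  # ('finished', 0)
```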
      • Re: (Score:3, Interesting)

        by franl ( 50139 )

        The world's shortest explanation of Gödel's Incompleteness Theorem, by Raymond Smullyan.

        We have some sort of machine that prints out statements in some sort of language. It need not be a statement-printing machine exactly; it could be some sort of technique for taking statements and deciding if they are true. But let's think of it as a machine that prints out statements. In particular, some of the statements that the machine might (or might not) print look like these:

        P*x (which means that the machine w

        • Basically, it is a consequence of "this statement is false"?
        • What would be funny is if we eventually discover that yes, technically there are statements that are true but cannot be printed, but in reality, there is only one such statement, "NPR*NPR*".

          This is why the incompleteness theorems don't give me a feeling of helplessness, as they seem to do to other people. Yes, you found an example which shows theoretical incompleteness. But can we construct OTHER statements that are also true but unprintable? If not, then there's no reason to point to the incompleteness the

          • What would be funny is if we eventually discover that yes, technically there are statements that are true but cannot be printed, but in reality, there is only one such statement, "NPR*NPR*"

            Actually, it's proven that there are an infinite number of them. Here's how it works. Let's call the language introduced in the GP post "SPL" (Statement Printing Language). So, we know that NPR*NPR* is the problem statement in that language. So let's add an axiom to SPL and call it SPL2. Here's the axiom:

            • NPR*NPR*
      • That’s a false extension of the original story. You can’t extend it like that. You misunderstood the meaning of the original story.

      • I think you can draw the same analogy here.

        Yeah, but don't - it's fucking terrible.

    • Re: (Score:3, Informative)

      by smallfries ( 601545 )

      It's an interesting technique, but it is not a guarantee.

      The summary doesn't mention the number of assumptions that the researchers make:
      + A working TPM module
      + An adversary limited to memory corruption
      + No unknown faults in the underlying system that can be exploited.

      Also, the second technique (restricted pointer indexing) relies on performing a static analysis of the target hypervisor and rewriting it into a suitable form. This is not guaranteed to terminate, let alone guaranteed to work, although it does

      • Agreed on all three points, especially point number two, which is not an original design at all. It's a state machine with yet another name attached to it (again, sigh), something I've been using as a design technique for over a quarter century now (just over half my life!). That was a major point of irritation here, acting as if it was something new. The minor nit, made almost major by repetition, was the use of "indexes", where "indices" is the proper term; however, I've become resigned to that of late in acad
        • I've started using a similar technique myself. Although a Windows partition on Boot Camp isn't really a virtual machine, the assumption is that the Mac partition (which isn't mounted by the Windows partition) is a small enough target that malware won't hit it.

          The checkpointing/rollback is handled by Winclone, which just nukes the relevant partition and updates it to whichever checkpoint was selected. It seems to work quite well and I haven't had any problems yet when installing questionable software and n

          • The original purpose here was to give 'hacking' (actually 'cracking', to use the correct term) the (AD, DNS, whatever) server limited viability. Once a cracked server was identified, simply restore from an earlier snapshot that predates the crack, patch or otherwise mitigate the vulnerability, and 'drive on'. This would have been especially useful during the days of the DNS exploit of the week not so long ago, and still seems attractive with the 'China syndrome' we're seeing now (which the press still doesn't
    • by Lumpy ( 12016 )

      I guarantee I can make an OS that is not infectable by malware.

      Have a PROM made of the OS and run it from there. The only way for it to get infected is to copy it to RAM, modify it, and then fire a JMP to the RAM location to run the new infected code.

      Make the PC incapable of running software from RAM and you've just made it impossible to infect.

      Usability may suffer a tiny bit, but I think customers will be happy with powering off and swapping cartridges to do different things. A cartridge rack could a

    • Okay, so one can protect the hypervisor's execution. How do we protect the OS and the hypervisor's software storage?

      There has to be a way to update the hypervisor, and presumably that update comes over the web. You can guarantee that the code will execute in a protected space, but can you guarantee you are executing the right code, or that the code itself does not have a security hole?

      Then there is the OS. Presumably this can still be infected. Also presumably some attacks will run in a layer be
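      One narrow slice of the update question above is at least checkable (a sketch assuming a trusted hash was pinned beforehand; real update systems use public-key signatures so the pin itself can be rotated, and a matching hash still says nothing about holes in the code):

```python
import hashlib

# Hash pinned at build time for the one update we intend to trust.
PINNED_SHA256 = hashlib.sha256(b"trusted update payload").hexdigest()

def verify_update(blob):
    """True only if the downloaded blob matches the pinned hash."""
    return hashlib.sha256(blob).hexdigest() == PINNED_SHA256

print(verify_update(b"trusted update payload"))   # True: matches the pin
print(verify_update(b"tampered update payload"))  # False: rejected
```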

    • I've found mechanical write-protect switches that prevent any writes to be a good security measure. Unfortunately, in the world of flashable memory, this leaves many boot items that used to be in ROM open to attack. For any hypervisor, there should be a hardware jumper or switch that write-protects it against any writes.

  • pdf? (Score:4, Insightful)

    by Cmdr-Absurd ( 780125 ) on Monday May 17, 2010 @08:23AM (#32235644)
    Link to a pdf version of the paper? Given recent security problems with that format, does anyone else find it funny?
    • Re: (Score:3, Insightful)

      Seems perfectly reasonable to me. Who would care more about provable hypervisor security than somebody with a badly infected guest?
    • Most PDF issues have to do with the reader and not with the format itself. Not saying that PDF is perfect, but it would be unfair to put Sumatra, Foxit, and Acrobat Reader in the same "PDF" boat.
    • by Yvan256 ( 722131 )

      I don't find it funny because not all PDF readers have the same security flaws as the Adobe Reader. Mac OS X comes with a built-in PDF viewer/printer, so why would I want to install anything from Adobe on my computer?

      • by tepples ( 727027 )

        I don't find it funny because not all PDF readers have the same security flaws as the Adobe Reader.

        That's not always true. Sometimes, Adobe and Foxit both correctly implement a PDF feature that was poorly designed, and they end up having the same vulnerability [slashdot.org] because of it.

        Mac OS X comes with a built-in PDF viewer/printer, so why would I want to install anything from Adobe on my computer?

        Because GIMP isn't enough, nor are the meager open-source SWF builders. And Wikipedia says Preview does not allow filling in PDF forms.

    • Here's a safe version of the paper: paper.none (0 bytes)
  • ...research that will be presented (PDF)...

    I wish that I had HyperSafe installed so I could open Acrobat in a virtual machine instead.

  • If it really guarantees no infection by malware, then it cannot be updatable or extensible. All it is suggesting is that the hypervisor cannot be altered from within a client operating system. I don't think this gives you anything that you don't already get with user-mode virtualisation like VirtualBox, where the host's system will write-protect pages.
  • by NotSoHeavyD3 ( 1400425 ) on Monday May 17, 2010 @08:33AM (#32235694) Journal
    Because if anybody could get a machine infected it'd be him.
  • The more you tighten your grip, the more will slip through your fingers.

  • What about securing a VM from the host, so you can run secure corporate VM images on an untrusted host? Now that would interest me...

    Dave

    • by Spad ( 470073 )

      While you're at it, I'd like a pony...

    • Wait, wait... so you want the hypervisor, the thing that by its very nature grants access to the various hardware resources and has direct access to virtualized memory, storage, and so forth... to be untrusted?

      Am I the only one that sees a contradiction here?

      • by fyonn ( 115426 )

        I'd like to be able to run a secure VM with a level of assurance that it can't be interfered with from the host upon which it's running. This task may well be impossible, but there's certainly a call for it. The classic example is running a corporate VM for access to work on a member of staff's own computer. The company would not trust that computer, but would want to be able to trust the image. They would want to know that any malware on the host could not affect the VM.

        I'm asking if any of the modern tec

        • You mean like running a custom-built Live distro with the apps you need built in? Not exactly what you said, but the same effect. That's what we've got access to at work for remote access over VPN on other hardware.

          • by fyonn ( 115426 )

            I was thinking a windows build with appropriate apps and VPN access...

            *if* it can be secured...

            dave

    • by mlts ( 1038732 ) *

      This is the same battle as DRM fights: whoever has control of the host can dump memory images of the VMs at will.

      Not that this can't be done, with the VMs protected from the host. Look how well the PS3 kept its security without a solid breach, and when it was breached, it was fixed by a ROM update in record time.

  • What about the evil bit [faqs.org]?

    • by leuk_he ( 194174 )

      Sorry, the evil bit is a bad idea, because to set it you have to write it, and since all executable memory is write-protected there is no way to tell the hypervisor of your bad intentions.

      The only workaround now is that you cannot do evil updates. But evil updates need a reboot... unless... with HA options you can move a running VM to another server, update and infect it with an evil update, reboot, and move the VM back, without the VM ever knowing the host was changed.

      Don't you love being evil?

      "chmod +

  • Fill your server full of concrete and chuck it into an active volcano.

    Otherwise, there's just varying degrees of risk.

    • Re: (Score:2, Insightful)

      by LordBmore ( 1794002 )
      Okay, I filled all of my servers with concrete and tossed them into the volcano. What next? I can't wait to tell my boss how secure we are.
  • While this sounds like a step in the right direction, any claims of "unhackability" are frankly lies, and both unethical and unprofessional in the extreme. Most currently used attacks were never expected, and quite a few were regarded as impossible before somebody went ahead and demonstrated them.

    On a related note, those technologies advertised as "unhackable", "absolutely secure", "provably secure", etc. consistently fail to deliver. In fact, these claims are usually an indicator of low quality, because the

    • by CAIMLAS ( 41445 )

      On a related note, those technologies advertised as "unhackable", "absolutely secure", "provably secure", etc. consistently fail to deliver.

      You must be familiar with SonicWall, then.

  • Why is it no one told these guys that adding more features never adds more security?

    Let's go over the x86 history.

    Start multitasking: need some sort of memory protection in HARDWARE because software can't do it.

    Realize that software implementations working with the hardware are buggy ... Damn.

    Add other protections such as NX ... realize software implementations are buggy ... Damn.

    Virtualize the OS into its own little space under a hypervisor, realize it's slow and implementations are buggy ... Damn.

    Add a hyper

    • Well, there you go: x86 legacy instruction sets, yet another reason to virtualize your Arduino! Layer enough software (e.g. http://www.multiplo.org/duinos/wiki/index.php?title=Main_Page [multiplo.org]) on top of your project and eventually we will make it secure. Heck, just add a TPM shield, a few million in research grants, and even more libraries, and eventually it will be so much safer to use. </sarcasm>

      If someone thinks that adding software is going to do much for security in the long run, then go no furt

  • If they REALLY make a true firewall around the hypervisor, then performance will be terrible.

    If you want decent network or display performance in a VM, then you have to use special drivers for the virtual devices that bypass the firewall.

    We have already seen security flaws in these special drivers.