Operating Systems Security IT Linux

Researcher Releases Hardened OS "Qubes"; Xen Hits 4.0 129

Trailrunner7 writes "Joanna Rutkowska, a security researcher known for her work on virtualization security and low-level rootkits, has released a new open-source operating system meant to provide isolation of the OS's components for better security. The OS, called Qubes, is based on Xen, X and Linux, and is in a basic, alpha stage right now. Qubes relies on virtualization to separate applications running on the OS and also places many of the system-level components in sandboxes to prevent them from affecting each other. 'Qubes lets the user define many security domains implemented as lightweight virtual machines (VMs), or 'AppVMs.' E.g. users can have 'personal,' 'work,' 'shopping,' 'bank,' and 'random' AppVMs and can use the applications from within those VMs just like if they were executing on the local machine, but at the same time they are well isolated from each other. Qubes supports secure copy-and-paste and file sharing between the AppVMs, of course.'" Xen's also just reached 4.0; some details below.
Dominik Holling writes "With a small announcement on its mailing list, the open-source community hypervisor Xen today reached the official release of version 4.0.0. The new features are: 'blktap2 (VHD support, snapshot discs, ...), Remus live checkpointing and fault tolerance, page sharing and page-to-disc for HVM guests, Transcendent memory (http://oss.oracle.com/projects/tmem/).' A complete list of all changes can be found on the Xen wiki and the source can be found on the official website and the Xen Mercurial repositories."
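
As a rough illustration of the security-domain model described in the Qubes summary above, here is a conceptual sketch in plain Python. It is not Qubes code; the names and the policy format are invented for the example. It just shows the idea of per-domain isolation with an explicit policy mediating copy-and-paste between domains.

# Conceptual sketch only (plain Python, not Qubes code): apps live in named
# AppVM domains, each with its own isolated clipboard, and copy-and-paste
# between domains only succeeds when an explicit policy allows it.
from dataclasses import dataclass

@dataclass
class AppVM:
    name: str            # e.g. "personal", "work", "shopping", "bank"
    clipboard: str = ""  # per-domain clipboard, isolated by default

# Hypothetical user policy: which source -> destination copies are allowed.
COPY_POLICY = {("personal", "work"), ("work", "personal")}

def secure_copy(src: AppVM, dst: AppVM) -> bool:
    """Mediated copy-and-paste: only succeeds if the policy allows src -> dst."""
    if (src.name, dst.name) not in COPY_POLICY:
        return False     # e.g. nothing may be pasted into "bank"
    dst.clipboard = src.clipboard
    return True

personal, bank = AppVM("personal"), AppVM("bank")
personal.clipboard = "some text copied in the personal domain"
print(secure_copy(personal, AppVM("work")))  # True: policy allows it
print(secure_copy(personal, bank))           # False: bank stays isolated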

Comments Filter:
  • I wonder what it will be like when it hits stable, and what device support it will have.

  • by rsborg ( 111459 ) on Wednesday April 07, 2010 @01:23PM (#31764430) Homepage
    A document that's infected would still need to be opened, and thus presents a vector that needs to be scanned against. Given the recent PDF exploit issues, I think this is still a large attack profile... still necessitating virus scanners (and app firewalls).

    Still, this is a great advancement... it will be interesting to see what performance impact this has.

    • Re: (Score:1, Interesting)

      by sopssa ( 1498795 ) *

      Sure, but nothing works against every threat. You will never discover a single perfect defense. That doesn't mean it's useless to harden individual parts (in this case, OS components) to keep rootkits away.

      It's dead easy to make a rootkit for the existing operating systems. In Windows you need to hook the system APIs. In Linux it's even easier - just replace the system executables. You even have the source code for them, making it really easy to add simple code that checks if a file/process/etc has certain t

      • Re: (Score:2, Informative)

        by BitZtream ( 692029 )

        It's pretty easy to make a rootkit for any PC-based OS ... the real problem is getting it loaded before the main OS. Contrary to popular belief, even with the advent of hardware virtualization helpers, boot viruses that hide themselves away from the main OS are nothing new and have been around probably longer than you've owned your own computer.

        The rootkit simply has to be first; after that, there's nothing anyone can do.

      • In Linux it's even easier - just replace the system executables.

        Wasn't there a best practice that put /sbin, /bin, /usr, and all the other executable files on a separate partition that was mounted read-only? You pretty much left /home, /etc, and /var RW?

        I guess if you wanted to install updates, you could remount RW or (safer) reboot into single-user mode, mount RW and install...
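
        A minimal sketch of that update workflow, assuming /usr really is a separate partition and this runs as root on a Debian-style system (swap in yum or zypper as appropriate); it is illustrative rather than a recommended tool:

        # Remount /usr writable just long enough to install updates, then
        # flip it back to read-only even if the update fails.
        import subprocess

        def remount(mountpoint: str, mode: str) -> None:
            """Remount an already-mounted filesystem read-write ('rw') or read-only ('ro')."""
            subprocess.run(["mount", "-o", f"remount,{mode}", mountpoint], check=True)

        def install_updates() -> None:
            remount("/usr", "rw")                  # open the update window
            try:
                subprocess.run(["apt-get", "-y", "upgrade"], check=True)
            finally:
                remount("/usr", "ro")              # close it again, even on failure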

    • Re: (Score:2, Interesting)

      by 0ptix ( 649734 )

      I guess the idea is to run the Adobe PDF reader in its own VM that has minimal permissions. Then when it is infected, the virus can't access anything the VM isn't allowed to. (grsec lets you do something like that with ACLs, roles, and users.) For example, if you do updates manually then you can restrict Adobe Reader's VM to not have net access, so the virus couldn't contact the outside world either. It's all a question of how much you're willing to compartmentalize your system. The more hassle you are willing

      • by Jahava ( 946858 ) on Wednesday April 07, 2010 @02:00PM (#31764882)

        I think the idea is that you'd run different domains to protect different sets of files. You'd run your tax software in a "tax" domain, and if any PDF software got infected, it wouldn't be able to touch the "tax" domain information.

        Compared to locked-down operating systems, you have a valid point (and my personal issue with this approach). However, it's not without its advantages. In a standard Linux system, every userspace process has access to around 330 system calls. Each one of these is an interface into the kernel, and a bug in even one of them is enough to take over the kernel. Furthermore, any application that can load kernel modules can potentially dominate the kernel.

        In the Qubes system, each domain is protected by a virtualization layer. It does have domain-to-hypervisor interfaces (similar to system calls) to allow I/O, graphics, and the copy-paste subsystem to run, but there are a lot fewer of them. They are oriented around a finite functionality - the aforementioned I/O, graphics, etc. - while system calls must exist for all userspace functionality. Therefore, as userspace applications get more complex and system calls (per-domain) increase in number and complexity, the domain-to-hypervisor interface will remain more or less static. This hopefully leads to them being easier to secure and lock down.
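
        As a toy illustration of that point (not Xen or Qubes code; the operation names are hypothetical), a narrow, enumerable cross-boundary interface is much easier to audit than the full system-call surface:

        # Only a small, fixed set of operations may cross the boundary.
        ALLOWED_OPS = {"read_block", "write_block", "draw_frame", "clipboard_paste"}

        def boundary_call(op: str, **args):
            """Dispatch only a small, enumerable set of operations across the boundary."""
            if op not in ALLOWED_OPS:
                raise PermissionError(f"{op!r} is not part of the interface")
            # Each handler behind this check would be small enough to review line by line.
            return {"op": op, "args": args, "status": "ok"}

        print(boundary_call("draw_frame", pixels=b"\x00" * 16))   # allowed
        # boundary_call("load_kernel_module", name="rootkit")     # raises PermissionError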

    • by Zerth ( 26112 )

      It does if you revert the VM after you are done. Nothing gets saved unless the infectious agent can break out of the VM. At worst, it'll send some spam if you allow the document reader VM net access.

    • by 99BottlesOfBeerInMyF ( 813746 ) on Wednesday April 07, 2010 @01:53PM (#31764798)

      A document that's infected would still need to be opened, and thus presents a vector that needs to be scanned against.

      If the PDF viewer is running in a separate VM container, however, what exactly do you think it's going to accomplish? Read your other PDFs? Sure. Delete them even? Okay. But since you probably did not give that VM access to your network, it's unlikely to be able to do anything actually beneficial to a malware writer.

      ...still necessitating virus scanners (and app firewalls).

      Well, virus scanners are a bonus, although not a lot of use on Linux given how little malware is out there for it. VM configuration takes over much of the role of application-level firewalls here, although the overhead tradeoffs of each approach should be looked at.

      • Re: (Score:1, Troll)

        "If the PDF viewer is running in a separate VM container, however, what exactly do you think it's going to accomplish?"

        Well, provided that "...Qubes supports secure copy-and-paste and file sharing between the AppVMs" I'll leave it open to your own imagination.

          Well, provided that "...Qubes supports secure copy-and-paste and file sharing between the AppVMs" I'll leave it open to your own imagination.

          One narrowly defined point of access to the other VMs is orders of magnitude easier to secure than the way it works now. It's the security equivalent of putting all your eggs in one basket and then watching that basket really, really closely.

      • I'd rather have my machine be a spambot for an hour than lose my PDFs.

        The worst-case scenario for being a spambot is that I get taken off the network for a few minutes. My PDFs? Maybe it's an archive of patents I need to review, the instructions for the fromulator, or my master's thesis. Certainly, they should all be backed up, and we should all test our *restore* from backup. That's the only real security, but I don't want deletion to happen lightly.

        In general, do apps really need delete permission anyway?

        • In general, do apps really need delete permission anyway? How about just giving them change permission. In other words, something like a local svn commit.

          Agreed. And Qubes does provide the ability to revert an entire VM, so even if your PDFs were all deleted or corrupted, it should not be permanent if you catch it in time.

          I think Macs come with something like that built in... the name escapes me.

          Macs do have a nice built-in versioning system called "Time Machine," but most users do not buy the required external drive to make use of it.
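
          A toy sketch of the grandparent's "change permission instead of delete" idea, purely illustrative (this is not how Qubes or Time Machine actually work): every save writes a new numbered copy, so a misbehaving app can add junk versions but cannot destroy earlier ones.

          import os

          def versioned_write(path: str, data: bytes) -> str:
              """Write data as path.N, where N is one higher than any existing version."""
              directory = os.path.dirname(path) or "."
              prefix = os.path.basename(path) + "."
              versions = [int(name[len(prefix):]) for name in os.listdir(directory)
                          if name.startswith(prefix) and name[len(prefix):].isdigit()]
              new_path = f"{path}.{max(versions, default=0) + 1}"
              with open(new_path, "wb") as f:
                  f.write(data)
              return new_path

          print(versioned_write("thesis.pdf", b"first draft"))   # creates thesis.pdf.1
          print(versioned_write("thesis.pdf", b"second draft"))  # creates thesis.pdf.2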

      • by HiThere ( 15173 )

        Why would you give a PDF reader the ability to modify or delete files on the disk? If you're running it in a virtual machine that's designed for security, I'd think that would be one of the things that would be prevented. Have it think of the disk as a really fast CD.

        • Why would you give a PDF reader the ability to modify or delete files on the disk? If you're running it in a virtual machine that's designed for security, I'd think that would be one of the things that would be prevented. Have it think of the disk as a really fast CD.

          While I agree that seems a reasonable configuration, provided it is easy for the user to get around (for PDF forms and PDFs I want to annotate), I don't think that is the normal setup for Qubes, the technology we're discussing. Rather, it lets you copy and paste files between VMs, so the workflow would probably look more like downloading them in one VM, then copying them to your PDF reader's VM. Normal users would probably delete them from the former to save space. Now if your PDF reader reacted badly to a PD

    • Re: (Score:3, Insightful)

      by Fred_A ( 10934 )

      >Still, this is a great advancement... it will be interesting to see what performance impact this has.

      Current machines (with the possible exception of so-called "netbooks") are so insanely fast that the performance impact of a virtualised environment doesn't matter much, save for a few very specific applications: games, graphics processing, etc. Not what typical users require. And there are ways to lower the impact when running a high-requirement application. It will require a bit more RAM (if even that), but current machines are certainly adequate CPU-wise.

      This is IMO one potential direction that OS archite

    • it will be interesting to see what performance impact this has.

      Performance impact?... performance improvement!

      Now virus scanners can target specific scanning methods to specific VMs! Oh sure, there's some VM overhead - but think of the efficiency other software (like firewalls and virus scanners) gains by having everything segmented like this.

  • Hardware sandboxing (Score:3, Interesting)

    by EdZ ( 755139 ) on Wednesday April 07, 2010 @01:25PM (#31764450)
    With the proliferation of multi-core CPUs and GPU clustering, I wonder how long until VMs simply become entirely separate physical systems sitting on your motherboard.
    • I just hope they have working NUMA support and have jumbo frames fixed in the network bridging in Xen 4.0.

    • With the proliferation of multi-core CPUs and GPU clustering, I wonder how long until VMs simply become entirely separate physical systems sitting on your motherboard.

      Yeah, I bet we could really accelerate the performance of these virtual systems if we could run them on dedicated hardware.

    • Re: (Score:3, Funny)

      by kgo ( 1741558 )

      Then they'd just be M's... ;-)

      • Indeed. Circular logic is just surprising sometimes.

        "Hey guys - wouldn't it be awesome if we setup the VM's so that each one of them had their own dedicated hardware!"

        About as bright as the time one of our web guys decided to use DNS to assign all his servers a name based on their serial number - and then started asking if there was any way to assign a name to each one that was easier to remember.

        • by tepples ( 727027 )

          "Hey guys - wouldn't it be awesome if we setup the VM's so that each one of them had their own dedicated hardware!"

          A blade server can do that.

          About as bright as the time one of our web guys decided to use DNS to assign all his servers a name based on their serial number - and then started asking if there was any way to assign a name to each one that was easier to remember.

          I believe that's called a CNAME.

    • Re: (Score:2, Funny)

      by meatpan ( 931043 )
      This will probably happen sometime in the 1970s with IBM's z-series. Also, in the 1990s Sun might introduce their own hardware isolation through a product called Dynamic System Domains. It's hard to tell though. I think the future is going to be rough for Sun.
    • Have a look at Sun's ideas on Zones and Domains. We're currently running several M-5000s.

      Oh, and waaaaay before that, there were mainframes.
  • Sandboxes and Jails (Score:3, Interesting)

    by bhima ( 46039 ) * <Bhima...Pandava@@@gmail...com> on Wednesday April 07, 2010 @01:26PM (#31764460) Journal

    I'd like to read a serious comparison between this and jails in FreeBSD and sandboxes in Mac OS.

    I think a lot of these ideas have been around for a very long time but they are such a pain in the ass, very few people actually use them.

    • by 0123456 ( 636235 )

      Sounds very similar to the security level capabilities in OpenSolaris too.

    • by vlm ( 69642 )

      I think a lot of these ideas have been around for a very long time

      Ya think so? At least since 1972? And VM is still in active use and under development?

      http://en.wikipedia.org/wiki/VM_(operating_system) [wikipedia.org]

      Of course you have to violate 104 IBM patents to run it on an emulator, but still...

    • I think a lot of these ideas have been around for a very long time but they are such a pain in the ass, very few people actually use them.

      I use FreeBSD jails all the time. Want a fresh, new environment for testing? ezjail-admin create testenvironment.example.com 10.20.1.5, ssh in, and start working on it. My understanding is that you're limited in practice to several tens of thousands of jails per machine, but I haven't bumped up against that yet.
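
      For anyone scripting that workflow, a small wrapper around the ezjail-admin commands the parent mentions might look like the sketch below (FreeBSD host with ezjail installed, run as root; the hostname and address are just the example values from above):

      # Create and start a throwaway FreeBSD jail for testing.
      import subprocess

      def fresh_test_jail(hostname: str, ip: str) -> None:
          subprocess.run(["ezjail-admin", "create", hostname, ip], check=True)
          subprocess.run(["ezjail-admin", "start", hostname], check=True)

      fresh_test_jail("testenvironment.example.com", "10.20.1.5")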

  • Won't work (Score:1, Insightful)

    by jmorris42 ( 1458 ) *

    This idea is an example of failing to understand the problem.

    The problem with security comes from several primary sources:

    1. Complexity. Too many layers with poorly understood security implications. This lady might actually understand the monster she spawned, but no admin trying to implement it will understand all of the corner cases. See SELinux.

    2. Shoddy coding. So this gets tossed over the wall and will (assuming it is to matter) be completed by people who don't really understand it. Unless this on

    • Re:Won't work (Score:5, Insightful)

      by Archangel Michael ( 180766 ) on Wednesday April 07, 2010 @01:53PM (#31764796) Journal

      1) Any system simple enough that anyone can use it is either a toaster or won't be useful in any customized way.

      2) Coding doesn't need to be "shoddy" to be a security risk. It simply needs to miss the edge cases nobody thought of when writing the code. If you make the code complicated enough and run enough checks, it becomes a complicated mess that nobody wants to use.

      The problem with security is one of balancing risk against the amount of protection built into the system. Back in the DOS days, I'm sure that DOS was insecure on many, many levels; however, because it was standalone, the security of "networking" wasn't even considered.

      However, the #1 security risk with computers isn't "code" or "programs" or hackers or whatever; the BIGGEST problem is Social Engineering, for which there is no fix other than "Stupid should hurt".

      When a web dialog box can mimic a system dialog box saying "Your Computer is Infected CLICK HERE to fix it", which downloads and installs Antivirus 2010 crapware, the problem isn't Firefox, Windows or anything any programmer can fix. PEBKAC, PICNIC and 1D10T errors aren't fixable by programmers.

      And if you had to fix these problems, you'd realize that hackers and such are spending more time on social engineering attacks to get their viruses, trojans, and other malware onto computers than on traditional methods.

      • by jmorris42 ( 1458 ) *

        > When a web dialog box can mimic a system dialog box saying "Your Computer is Infected CLICK HERE to fix it",
        > which downloads and installs Antivirus 2010 crapware, the problem isn't Firefox, Windows or anything any
        > programmer can fix. PEBKAC, PICNIC and 1D10T errors aren't fixable by programmers.

        Yes it is. Firefox should never set the executable bit on a download. (On Windows I guess it could neuter executables with an extra extension?) I don't care how 'convenient' it might be, just don't all

        • Second, a properly configured Linux machine isn't subject to that sort of attack because we use signed packages.

          Oh really? From here [slashdot.org]:

          Pwnie for Mass 0wnage

          Awarded to the person who discovered the bug that resulted in the most widespread exploitation or affected the most users. Also known as ‘Pwnie for Breaking the Internet.’

          Red Hat Networks Backdoored OpenSSH Packages (CVE-2008-3844)
          Credit: unknown

          Shortly after Black Hat and Defcon last year, Red Hat noticed that not only had someone backdoored the OpenSSH packages that some of their mirrors were distributing, but managed to sign the packages with Red Hat's own private key. Instead of revoking the key and releasing all new packages, they instead just updated the backdoored packages with clean copies, still signed by the same key, and released a shell script to scan for the MD5 checksums of the affected packages. What makes this eligible for the "mass0wnage" award is that nobody is quite sure how many systems were compromised or what other keys and packages the attackers were able to access. With very little public information available, the real casualty was the public's trust in the integrity of Red Hat's packages.

          I suggest you make sure to read the bolded part a few times.

          • Oops the link got messed up. It's here [pwnie-awards.org].

          • by jmorris42 ( 1458 ) *

            You might want to read the details a bit more than the overly sensationalized version at pwnie-awards. None of the bogus packages have been seen in the wild. That incident was about as close a near miss as you can get without a kaboom, but there was in fact no kaboom. Will something like it happen eventually? Probably. Lt. Commander Susan Ivanova said it best: "Sooner or later, BOOM!" But it doesn't happen on a daily basis.

        • by tepples ( 727027 )

          Second, a properly configured Linux machine isn't subject to that sort of attack because we use signed packages.

          Signed by whom? Allow self-signed packages and malware authors can self-sign. Reject self-signed packages and you can't run anything that hasn't been packaged for your distro, such as new software written by a user of a different distro or proprietary commercial software.

        • by butlerm ( 3112 )

          None of those protections exist for Windows or Mac users which is why they can get 0wned with one bad click.

          Not true. On Windows XP (for example), if you run as a limited user, nothing you run can take over the system, because you don't have privileges to do so.

    • And this one adds the fact it doesn't even try to secure the apps, it tries to stop misbehaving apps (like SELinux) from accessing things it shouldn't

      Well, that's the point. If an app is sandboxed, it doesn't matter if that app is insecure... your OS won't get hosed by that app.

      If history shows anything, giving an attacker any access to run code locally gives them all they need to leverage it into root eventually.

      This is an implementation issue, not a theoretical issue. If the virtualized locations are tr

      • by jmorris42 ( 1458 ) *

        > Well, that's the point. If an app is sandboxed, it doesn't matter if that app is insecure... your OS won't get hosed by that app.

        Except history tells us it never works that way in the real world. Java? Nope. BSD Jails? Nope. Virtualization? Nope. Containers? Nope. All promised to build a perfect isolated subsystem that apps couldn't escape from. All have had exploits. It helps, but it can also harm by leading to a false sense of security that causes people to do things they never would have

        • Thanks for the fairly level-headed response; they seem to be few and far between on Slashdot nowadays.

          EVERY sandboxing scheme attempted to date has failed, some more messy than others, some more publicly than others, but ALL have failed.

          Hmmm... I wasn't aware that virtualization was a security failure, and that every instance of VM implementation failed to maintain security of the host OS. Do you know where I might get some good reading on the subject?

          • by jmorris42 ( 1458 ) *

            > Hmmm... I wasn't aware that virtualization was a security failure, and that every instance of VM implementation failed to maintain security of the host OS.

            Google is yer friend. "security bug vmware" "security bug xen", etc. will quickly hit exploitable bugs that allow an attacker to escape. Meanwhile "security bug kvm" only gets crash bugs and potential escapes but then it is the new kid on the block. Not having any luck finding the bugs in the hardware, but pretty sure /. had something on it in the

        • Huh? I've heard of some corner-case exploits on chroot'ed jails, and the author of this system (Joanna) claimed some exploits on VMs, which claim was controversial, IIRC. But in practice chrooted jails work pretty well and seem pretty secure if well implemented - I've been using them on FTP servers for years without incident, and they've passed numerous security reviews by our company's security team. VMs likewise seem to be pretty well isolated with either VMware or VirtualBox. I'm not aware of any

    • Your post is an example of failing to understand information security.

      Security practitioners have accepted that it is unrealistic to expect all applications to be free of security holes. It is also unwise to insist on the fantastically higher expense of using dedicated hardware rather than virtualization for most applications. Because of this, we have adopted a strategy of "defense in depth" whereby we layer multiple countermeasures to reduce the probability of successful exploitation.

      A secur

    • This adds another layer of security on top of Linux which makes it more secure than it would otherwise be.

      If history shows anything, giving an attacker any access to run code locally gives them all they need to leverage it into root eventually.

      Perhaps, but this is another layer they'll have to get through before they can root the machine. So now they have to exploit the program, gain access to the VM that it's running on, and then jump from that VM to the host OS. It's not a perfect solution and it's almost certainly not impenetrable, but that doesn't keep it from being a useful tool.

  • Singularity (Score:3, Interesting)

    by organgtool ( 966989 ) on Wednesday April 07, 2010 @01:34PM (#31764564)
    This approach seems similar to the one taken by Microsoft in their Singularity OS [wikipedia.org]. I wonder whether context switching will become an issue and, if it does, how it will be addressed.
  • Hardly a victory... (Score:5, Interesting)

    by Jahava ( 946858 ) on Wednesday April 07, 2010 @01:38PM (#31764634)

    Let me begin by saying that this sounds like a truly interesting approach to security. Virtualization technology defines very clear hardware-enforced boundaries between software domains. In the standard case, those domains contain different operating systems, each of which is provided privilege-level-based sub-domains. In this particular case, each domain is dedicated to running sets of user-space applications, and the hardware boundary is used for userspace isolation, as opposed to virtual-machine OS isolation.

    So my "home" domain is infected, but my intellectual property is in my "work" domain. The virtualization boundary means that a virus can get Ring 0 access and still not be able to touch those IP files. Hurray ... except wait. There must be an interface between the "home" domain and the hypervisor, else things like copy-and-paste and hardware resource arbitration can't work. Here's what some infection paths would look like:

    • Standard XP Install: (Firefox)
    • Standard Vista / Linux Install: Firefox -> (Kernel)
    • Qubes Install: Firefox -> Home Kernel -> (Hypervisor)

    Maybe the paths can be locked down better, but a vulnerability is a vulnerability. It's clearly harder for a virus to get full control, but that's just throwing another obstacle in the way. If one is bad, and two is better, maybe three is even better, but nothing's perfect. Why is the domain-to-hypervisor path considered any more secure than the userspace-to-kernel path? If it's not, you're just adding more complexity, which could mean more potential for vulnerabilities! If you're locking down privilege boundaries, kernels like FreeBSD (jails) and even userspace execution environments like Java (JVM) have been working on that for years.

    It's cool, but I doubt it will be game-changing.

    • Why is the domain-to-hypervisor path considered any more secure than the userspace-to-kernel path? If it's not, you're just adding more complexity, which could mean more potential for vulnerabilities!

      It's considered more secure in the same way that it's more secure to have a firewall instead of just trying to secure the applications. As long as they can secure the interfaces between the host OS and the VMs, they have security. If they can't secure that interface, then they're still left with no more vulnerabilities than they had before.

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        The hypervisor is simpler and therefore easier to audit for issues (or even prove correct).

  • Since KVM is in the mainline kernel and Red Hat is now supporting it 100%, this seems like a bad time to start anything on Xen.

    To which I say good riddance; virtualization is just another app, and KVM gets this right.

    • With paravirtualization, Xen amounts to the same thing. The difference is that Xen is the OS and Linux is the app.

      You may not like Xen as an OS, but it does have some very nice qualities that are hard to deliver on Linux alone.
    • SUSE will also support KVM in SLES 11 SP1 and expects [slideshare.net] that, long term, it will "become equivalent to Xen". Ubuntu and Fedora also support KVM. Xen doesn't care what distros do (they don't care about getting all their code merged into mainline either); they seem to think they can ignore what mainstream OSes do, just like VMware. I suppose they will die some day; I'm not using third-party software if I can get the same functionality from the OS.

    • Since KVM is in the mainline kernel and Red Hat is now supporting it 100%, this seems like a bad time to start anything on Xen.

      To which I say good riddance; virtualization is just another app, and KVM gets this right.

      They explain that in their FAQ and architecture PDF. One of the reasons is that KVM doesn't allow moving (e.g.) networking and I/O drivers into separate unprivileged guests.

  • Remus (Score:3, Informative)

    by TheRaven64 ( 641858 ) on Wednesday April 07, 2010 @01:53PM (#31764800) Journal

    The Remus stuff in Xen is very cool. A couple of days ago there were some posts about HP's NonStop stuff as an example of something you could do with Itanium but not with commodity x86 crap. Remus means that you can. It builds on top of the live migration in Xen to keep two virtual machines exactly in sync.

    Computers are deterministic, so in theory you ought to be able to just start two VMs at the same time, give them the same input, and then see no difference between their state later on. It turns out that there are a few issues with this. The most obvious is ensuring that they really do get the same input. This means that they must handle the same interrupts, get the same packets from the network, and so on. Anything that is used as a source of entropy (e.g. the CPU's time stamp counter, jitter on interrupts, and so on) must be mirrored between the two VMs exactly. This was already possible with Marathon Technology's proprietary hypervisor on x86, but is now possible with Xen.

    As with the live migration, you can kill one of the VMs (and the physical machine it's running on) and not even drop network connections. This leads to some very shiny demos.
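
    As a very rough illustration of the fault-tolerance idea, here is a toy model in plain Python (not Xen or Remus code, and it glosses over the input-mirroring details described above): the primary executes speculatively, buffers its outgoing replies, and releases them only after the backup has received a checkpoint of the state that produced them, so a failover never exposes state the backup lacks.

    class Backup:
        def __init__(self):
            self.state = {}

    class Primary:
        def __init__(self, backup):
            self.state, self.out_buffer, self.backup = {}, [], backup

        def handle_request(self, key, value):
            self.state[key] = value                          # speculative execution
            self.out_buffer.append(f"reply:{key}={value}")   # hold the reply back

        def checkpoint(self):
            self.backup.state = dict(self.state)             # ship this epoch's state
            released, self.out_buffer = self.out_buffer, []
            return released                                  # now safe to send to clients

    backup = Backup()
    primary = Primary(backup)
    primary.handle_request("balance", 100)
    print(primary.checkpoint())  # ['reply:balance=100'], visible only after the checkpoint
    print(backup.state)          # {'balance': 100}; the backup can take over from here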

    Oh, and I should probably end this post with a gratuitous plug for my Xen internals book [amazon.co.uk]

    • This was already possible with Marathon Technology's proprietary hypervisor on x86, but is now possible with Xen.

      It was also possible with VMware's FT technology and I bet it's significantly easier to configure than whatever just got hacked into Xen.

      Maybe there are advantages to Xen's approach? Maybe it supports vSMP? I would have been more interested in hearing about that, than a plug for your book.

      • It's pretty easy to configure Remus. You just run a couple of commands to turn it on, although you need some other stuff (as you do with live migration) to make sure both VMs on different machines get the same network packets. Remus hasn't 'just got hacked in' to Xen. The first prototype was demonstrated at the XenSummit back in Spring 2007. It's had three years of testing and refactoring before being shipped as an official part of the hypervisor (it's been publicly available as a third-party patch set

  • Looks like a nice approach to program isolation. A system that was in some ways similar to Qubes, known as WindowBox, was developed for Windows. My research takes another approach: program restriction. Systems such as SELinux and AppArmor allow precise policies to define the types of actions and resources which are made available to each application. However, the finer the granularity of privilege assigned, the more detailed and complex policies become. The system I created for my PhD research, FBAC-LSM, r
    • Systems such as SELinux and AppArmor allow precise policies to define the types of actions and resources which are made available to each application. However, the finer the granularity of privilege assigned, the more detailed and complex policies become.

      I don't see how this is really a problem though, particularly with AppArmor. When you want to create a profile, you just start the app in profile mode, run it through its usual paces, then hit save. The profile runs automatically when AppArmor is restarted. That's an almost point-and-shoot level of ease for enabling very robust security, something anybody who's comfortable with the command line can do. SUSE even has a GUI for it. With Ubuntu, a simple apt-get install apparmor-profiles installs ready
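
      A sketch of that profiling workflow using AppArmor's standard tools (run as root; package names and paths vary by distro, aa-genprof is interactive, and the target binary is an arbitrary example; this just strings the steps together):

      import subprocess

      def profile_and_enforce(binary: str) -> None:
          subprocess.run(["aa-genprof", binary], check=True)  # exercise the app, then save the profile
          subprocess.run(["aa-enforce", binary], check=True)  # switch the profile to enforce mode

      profile_and_enforce("/usr/bin/evince")  # pick any binary you want confined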

      • While AppArmor can be considered a huge improvement usability-wise over some previous systems, policy can still be extremely complex, as it exposes the complexity of the resources used by applications and platforms. I ran a usability study, which amongst other things showed that even many advanced users and experienced Linux system administrators cannot successfully vet the policies created in learning modes such as that used in AppArmor. One of the problems with this approach is that typical applications c
        • Thanks for your very polite and informative response. As soon as I get a little bit of time, I'm definitely going to take a close look at your project.
  • Saw Qube.

    Thought Cobalt.

    Leaving sad.

  • I've had a lot of problems with EC2 that upgrading to at least 3.3 would fix. I just hope Amazon starts thinking about upgrading EC2 or migrating it off Xen; they seem to have gotten stuck on 3.1, and I've seen nothing that indicates there's going to be an upgrade.

  • by spacepimp ( 664856 ) on Wednesday April 07, 2010 @02:35PM (#31765460)
    The real issue still remains: the end users (PEBKAC). Take my father, for example. Sure, you have a Qube for banking, a Qube for work, and a Qube for home use. Now the home-use one, where he does his "magic" or whatever he does to infect/taint/destroy any PC I put in front of the man, gets entirely infected, spywared/malwared/chunked and muddled. So he can't get to his phishing emails about how to make millions on the internets by getting the diamonds out of Namibia. He can't do that from the infected Qube. He'll then go up the chain to his private banking Qube to install his makingmillions.exe so it will work again. Long story short... some people cannot help being victims. I'd give the man Linux, but he always finds a reason it's keeping him from being successful... I know that by keeping these Qubes sandboxed it will be harder for them to get tainted, but they will find a new way to get to my father.
    • But his banking Qube should be set up not to allow execution of any software (except the banking software). And if it's using a web browser, it should be set up only to access the bank's website.

      But I still think that having banks give out/sell special-purpose computers with a ROM chip as the only storage is the best way to prevent hacking of bank accounts.

      All money-transfer operations would then be sent to this computer, where the user would have to confirm the data on the special-purpose computer, which coul
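
      An illustrative sketch of the "banking Qube may only reach the bank" rule suggested above: a per-domain allow-list consulted before any outbound connection (the domain names and hostnames are invented for the example).

      ALLOWED_HOSTS = {
          "bank":     {"www.examplebank.com"},   # nothing else is reachable
          "personal": None,                      # None = unrestricted
      }

      def may_connect(domain: str, host: str) -> bool:
          # Unknown domains get an empty allow-list, i.e. deny by default.
          allowed = ALLOWED_HOSTS.get(domain, set())
          return allowed is None or host in allowed

      print(may_connect("bank", "www.examplebank.com"))    # True
      print(may_connect("bank", "makemillions.example"))   # False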

      • by sowth ( 748135 ) *

        You mean a "Trusted Computing" device? Yes, and maybe we should call congress and ask them to try passing the SSSCA [cryptome.org]again so Palladium will be required by law. Then Microsoft can rule us all!

    • by garaged ( 579941 )

      That is an oversimplification. Yeah, the end user is a big point, but Joanna has been using this approach for a while. It might be a little paranoid, but that girl knows a LOT about security, and we will be following her advice for a few years, I'm sure.

  • Application virtualization with similar sandboxing has been out for several years now.
  • Great ... another 'OS' that has its own new set of problems ... plus, before it's actually useful in the real world, you'll have to come up with ways to give it all the speed, power, and flexibility of the OSes we use now, which it doesn't have ... and by the time you add that back, you'll end up right back where you are now.

    Apps and data are useless on an island. When you're on an island, you're safe from attackers.

    To actually do something useful, however, data needs to move on and off the island, at which point, you're

    • What this virtualization will do, if it works, is prevent applications from making any unauthorized changes to the OS. On my box at home, if I run a program, I'm betting that it can't somehow get root access and mess with my system files. If it's running in a virtual machine, and can't get out of it, it can trash the virtual machine all it likes; I'll just release the VM when I've run the program and spin up a new one to run the next program.

      It isn't perfect, of course, even if it can prev
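
      A sketch of that "trash it, then revert" workflow using libvirt's command-line tool rather than Qubes itself (assumes a libvirt-managed VM; the VM name, snapshot name, and host command are placeholders):

      import subprocess

      def run_disposably(vm: str, host_command: list) -> None:
          """Snapshot a VM, run something against it, then roll everything back."""
          subprocess.run(["virsh", "snapshot-create-as", vm, "pristine"], check=True)
          try:
              subprocess.run(host_command, check=True)   # e.g. ssh into the guest and run the untrusted app
          finally:
              subprocess.run(["virsh", "snapshot-revert", vm, "pristine"], check=True)
              subprocess.run(["virsh", "snapshot-delete", vm, "pristine"], check=True)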

  • IBM's virtual storage OSes (OS/VS1 through the current z/OS all share a lot of common componentry), in conjunction with the hardware architecture, have similar ideas: each system service runs in its own address space. You have to be an authorized system service to communicate between address spaces; if you try otherwise, your program fails. If you try to store outside of your virtual address range, your program fails. To become an authorized program requires permission in one way or another from system admini
  • From the article:

      "These mechanisms finally move the bull (untrusted data) from the china shop (your data) to the outside where it belongs (a sandbox)."

    The Mythbusters showed that bulls will try to miss china if it's at all possible. http://mythbustersresults.com/episode85

  • It will be called "Tesseracts"...
  • "Qubes" is a frikken terrible name. Something like "stained bedsheets" would be much better (Linen-XXX, geddit?). But I suppose a sense of humour is too much to expect nowadays.
  • GNU HURD [wikipedia.org], thank you very much.

    It's just around the corner.

  • Being able to create AppVMs on the fly is a great idea, and one that should be a common OS feature. But it sounds like it will have the same problem that a site-specific browser does: how do you ensure that a given operation will be carried out in the proper domain?

    For instance, you get an email from a colleague (in your email security domain) that includes a link. You click the link. Now, in which domain does the browser window open?

    If you tell me I need to copy the link, switch to my random-crap-from-coll
