
2010 Will Be the Year of Sandboxing Apps 203

Posted by timothy
from the layers-within-layers dept.
Trailrunner7 writes "In a guest editorial on Threatpost, Mac hacker and security researcher Dino Dai Zovi writes that 2010 will be the year that software vendors get religion about sandboxing untrusted data in desktop apps. 'Instead of the usual top ten lists that are all-too-common with predictions for the new year, I have just one: 2010 will be the year of desktop applications handling untrusted data in sandboxed processes, and it will be about time. The largest Internet security threats now arrive through malicious web pages or e-mail attachments. This is because attackers are opportunistic and these are the weakest links especially because they easily pass through every firewall. Security is not and never was about SYN packets, it is about data: the software attack surface that attacker-controlled data interacts with and what sensitive data the attacker can get a hold of if they can exploit vulnerabilities in that software.'"
This discussion has been archived. No new comments can be posted.


  • And the year of.. (Score:2, Insightful)

    by sopssa (1498795) *

    .. bloat.

    Just look at how slow IE8 is to use.

    • ...and if you think Exchange 2007 is evil now (what with store.exe arrogantly sucking down 95% of your available RAM, no matter how much RAM you have, whether it needs to or not), I simply cannot wait until someone gets the idea that hey, maybe they should sandbox services too!

      /P

      • by alen (225700)

        and what exactly is the point of having RAM go unused?

        • Re:And the year of.. (Score:5, Informative)

          by spun (1352) <loverevolutionary@@@yahoo...com> on Wednesday January 06, 2010 @04:44PM (#30674634) Journal

          and what exactly is the point of having RAM go unused?

          File cache. RAM unused by bloated applications gets used by (most) operating systems to cache files, resulting in quicker disk access.

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            Exchange takes the file cache into account when setting its cache size. If you start paging it can reduce its memory usage. The point here is subtle:
            Free memory = Bad (wasted resources which can be used to reduce I/O)
            Paging = Bad (bad performance)

            So Exchange increases its memory usage unless the machine is paging.

            • Re: (Score:2, Interesting)

              by BlackSnake112 (912158)

              Great idea, but it looks like Exchange is doing it wrong, since the Exchange machines I have seen are often using more memory than is installed, so they are paging while also using all the RAM they can. These are not the massive Exchange systems that large companies have. These Exchange systems have 5-20 email accounts on them. Not large at all. So why is Exchange using between 5GB (on a system with 4GB installed RAM) and 18GB (on a system with 16GB installed RAM) of RAM? I am not the exchange admin, I pointed it out

              • by dave562 (969951)

                What FUD are you spewing? I am looking at an Exchange box right now that is hosting 100 mailboxes. Store.exe is using 683,000K of memory. The Symantec AV for Exchange services are eating another 700,000K. Combined that is under 2GB.

                Something is wrong with your specific Exchange server.

              • by toadlife (301863)

                So why is exchange using between 5GB (on a systems with 4GB installed RAM) and 18GB (on a system with 16GB installed RAM) of RAM

                As I understand it, there is a performance penalty involved with paging data into RAM. By leaving the pages in RAM after programs exit or release memory, the next time that data is read it can be accessed faster because it doesn't need to be paged into RAM again. That "extra" (the amount of memory being "used" over the physical memory) is not actively being used and does not degrade performance. It's just paged out. Linux and FreeBSD will do this too, and I'm pretty sure that Windows was a la

            • by Jawn98685 (687784)

              Exchange takes the file cache into account when setting its cache size. If you start paging it can reduce its memory usage...

              All of which takes time. No, thank you.

        • What's the point of operating systems that can run more than one program if we're happy to let one program eat up all the memory?
          • Re: (Score:3, Insightful)

            by alen (225700)

            unless you're using SBS, most organizations will only run Exchange or SQL or one major app on a server. on our database servers we're running the HP software and SQL. we even put all the third party database drivers on a separate server so as not to cause any potential issues.

            • by Viol8 (599362) on Wednesday January 06, 2010 @05:26PM (#30675132)

              "unless you're using SBS " or run unix/linux " most organizations will only run Exchange or SQL or one major app on a server"

              There, fixed it for you. Curiously unix can generally cope with running more than one app/DB without falling over or having one app
              screw up the other.

              "we even put all the third party database drivers on a separate server so as not to cause any potential issues."

              Well that sums up running a Windows server doesn't it.

              • by aix tom (902140)

                Yep. The only thing that makes Windows "servers" half-way bearable is to put one server for each single service you need in a virtual machine.

                So basically the whole thing IS already sandboxed.

                • Actually, single-purpose VMs are a fair idea for other OSes as well. If the configuration for a given environment is a single application's needs, and that can be replicated easily, it's more maintainable. If you have to configure a dozen apps for every server or instance, it becomes harder to manage.
              • If you have the skill, and are willing to troubleshoot the process, installing multiple applications on a Windows server is entirely doable.

                The problem comes from the fact that it's so complicated to get it working correctly and even more difficult to troubleshoot it in the event of issues that you end up with a lower TCO in a virtualized "one VM per app" type of scenario.

                I, not having a large amount of Windows licenses or servers to work with on certain occasions (and having worked for peanuts on o
            • by Gr8Apes (679165)

              You've merrily drunk the MS Kool-Aid. The idea that a server is only good for one task sells... more hardware and software!

              We happily run a DB, mail server, web application server and more on a single decently sized box.

              (Yes, there are situations where you need to separate functions due to pure load optimization/performance requirements, but Exchange doesn't meet those criteria as in most cases the server is sitting idle except when a mail virus comes along, in which case they fall over)

          • Also, lots of memory used =/= efficient programming.

            • It all depends on what efficiencies you are seeking. Many organizations value developer time over CPU/memory usage and so never bother profiling their software (the only way to do it) to look for more efficient machine use. "Worse is better" applies to source code as well.

            • Actually, the more I think about your statement, it's just wrong.

              You are assuming lots of memory used implies memory is wasted. I wouldn't assume that. There is usually a time/space trade off in algorithms.

            • by abigor (540274)

              People who say things like this typically know nothing about programming or how operating systems allocate resources, let alone how to compute efficiency. No offence.

            • by neokushan (932374)

              Please go tell John Carmack that he didn't efficiently program the early Quake engines, then. They use lookup tables (i.e., extra memory) to compute normals instead of having the CPU do it. Why? Because it's actually faster, or at least it was at the time. That's just one example off the top of my head that proves you don't actually know what you're talking about.

    • Re: (Score:2, Funny)

      by SnarfQuest (469614)

      If you want to leave a lot of openings in your sandbox for malicious software to work through, you have to expect things to slow down.

    • by alexhs (877055)

      And mostly useless bloat at that.

      It's only adding one more layer to the dancing bunnies problem.

      Virtualization is detectable. Your dancing bunnies malware will complain that it needs better access to the computer in order to run.

      You only need memory protection, and an OS with some access control mechanisms.

      If the application can't run in those constraints:

      • Educated users will know that something is going wrong;
      • Uneducated users will escalate privileges one by one until the computer is pwned.

      Now what I think

      • Re: (Score:3, Insightful)

        by lgw (121541)

        It's easier to snap-and-rollback a virtual machine than a physical machine when the user causes it to get pwnt.

        It's easier to snap-and-rollback a sandboxed (jailed) app than a virtual machine.

        It's easier (at least in Windows) to give the user admin control of his virtual machine than not. That doesn't mean he needs any sort of privileges on the host machine. Depending on the app sandboxing paradigm, the app can run as admin (as far as it knows), but the user doesn't have the ability to escalate the apps p

      • by neokushan (932374)

        And who's going to pay to have all these signed apps run?

        That scenario ends up going one of two ways:

        1) The signing process is made free and becomes self-signed (think Android), which completely negates the whole point of it, as anyone can sign anything.

        2) The signing process costs money; no company will do it for free, so in order to get your app signed, you need to pay. This isn't cheap, often 3-digit territory, and only really feasible for fairly large companies. Indie developers are screwed. Open source so

    • Re: (Score:3, Interesting)

      by LordLimecat (1103839)
      Chrome uses a sandbox model, and it seems to do OK. Programs running in Sandboxie [sandboxie.com] seem to run pretty quick too. Is it possible not all sandbox apps are created equal?

      I'll also note that IE8 has more security than IE7, and yet curiously runs much faster than its predecessor. Seems like security vs speed is a false dichotomy.
  • This is much more of a wish than a prediction. Microsoft has only barely started to offer sandboxing, and it's not yet common practice among other desktop application developers.

    • And I predict this will be the year of 'Spun getting freaky with Kari Byron of MythBusters.'

    • Re: (Score:3, Insightful)

      by tempest69 (572798)
      Sandboxing is long overdue. It's a primitive step in the right direction, but it's a necessary one before the whole host of steps that make a stable system can be taken. There is a freakload of work that needs to be done to get past the mess that exists in current operating systems. But instead of a really innovative system, we keep getting more of the same: incremental improvements to the desktop.
      Sandboxing is a decade late; we should be so much further by now.. dang.

      Storm

      • Even in Windows.

        My home system? Windows XP. And I use VMware Player to access the internet. And nothing else. That's the trick. Towards that end here's what I've done:

        Step 1. I got a USB 2.0 10 Base T network doohickey. Then I plugged it in to my Windows box. It had never heard of the thing and wanted a driver. Cool! Step one - passed. There is no way my main machine can use this thing to get on the net. FWIW, if it had known how to connect to this thing I would have gone and found the INF f

        • by lgw (121541)

          There are proven VMware breakout exploits. You've merely forced the malware to use something other than the network to escape. Not a practical risk today, but it's only a matter of time, unless we re-invent "trusted computing" with 100% less evil, and encrypt the host resources (especially memory, including CPU cache) at the hardware level.

    • Microsoft might be doing more than you think. TFA brings up Protected Mode Internet Explorer, but Microsoft is incorporating sandboxing-type ideas [infoworld.com] into Office 2010, too. For example, before it opens files, Word 2010 will validate them against known-good and known-bad schema. The idea is to detect potentially risky files/actions and run them with reduced privilege. So if a given file was created using an old version of Word that includes implicit vulnerabilities, for example, Word 2010 will open it in read-o

      • by Gr8Apes (679165)

        ... So if a given file was created using an old version of Word that includes implicit vulnerabilities, for example, Word 2010 will open it in read-only mode with macros disabled, while giving the user a button to activate the disabled features (with an "it's your funeral" warning message)

        Ahhh good - UAC all over again....

  • Windows 7 (Score:4, Funny)

    by gbjbaanb (229885) on Wednesday January 06, 2010 @04:30PM (#30674452)

    Great, I just upgraded from XP to Windows 7 and now all my apps have to be run in XP Mode's virtual machines. Thanks Microsoft. :)

    • Re: (Score:3, Interesting)

      by wisty (1335733)

      Doesn't FreeBSD have some sort of "jail" functionality? And hasn't it since the year 2000?

      I'm not convinced that virtualizing a whole frigging OS is always the best approach. It's great for running XP or Linux on a MacBook, or XP on a Linux box (if Wine isn't enough), but the RAM use is high enough to severely limit its uses for security.

      I'm not using a browser if it opens a new OS for every damn tab, for example.

      OS tools (jails, lower level user accounts, etc) are going to be better. Or using a State Machine, or some other

      • Re: (Score:3, Interesting)

        by gbjbaanb (229885)

        You're looking for the chroot [wikipedia.org] command; Linux has it too.
        It basically restricts an app to a given directory and its subdirs, which only causes problems when the app tries to reach out of the jail to, say, /tmp or /etc.

        According to wikipedia, chroot has been around since... 1982.

        (yes, FreeBSD jails [wikipedia.org] are better, but still based on the same concept).
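The chroot pattern described above can be sketched in a few lines of Python (the jail path here is just an illustration; chroot(2) requires root, so this sketch forks a child and reports whether the kernel allowed the call, leaving the parent process unconfined):

```python
import os

def enter_jail(new_root):
    """Try to confine the calling process to new_root via chroot(2)."""
    try:
        os.chroot(new_root)
        os.chdir("/")  # without this, the old cwd is a door out of the jail
        return True
    except PermissionError:
        return False  # chroot(2) requires root (CAP_SYS_CHROOT)

def try_jail(new_root):
    """Fork, jail only the child, and report whether the jail took effect."""
    pid = os.fork()
    if pid == 0:
        # Child: after a successful chroot, "/" is all it can ever see.
        os._exit(0 if enter_jail(new_root) else 1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) == 0

# Succeeds only when run with root privileges; unprivileged callers are refused.
print(try_jail("/tmp"))
```

Note the chdir("/") step: a process that chroots without changing directory keeps a handle outside the jail, which is one classic escape route. FreeBSD jails close this and several other holes, which is why they are the stronger mechanism.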

    • That's not a bug, it's a security feature. At least, that's how we'll market it.
  • Sandboxes are a tried and true idea; they work well. It's about time.

  • by Lumpy (12016) on Wednesday January 06, 2010 @04:32PM (#30674480) Homepage

    sandboxie... Great program, but it will NOT work on a 64-bit OS.

    It has kept my daughter's PC free of crap because she refuses to not click on everything and to not use Internet Explorer... so I sandboxed it. Click on everything; it's all sandboxed.

    • by sakdoctor (1087155) on Wednesday January 06, 2010 @04:39PM (#30674584) Homepage

      Whoa! Your daughter is off the rails, and your soft approach to parenting is not helping.
      Install linux on her system right now, and don't give her the root password until she's 18!

      • by Lumpy (12016)

        Yup, WAYYYY off the rails... she turns 18 this week. Sadly she is very much like her mother.

        • Re: (Score:2, Funny)

          by Anonymous Coward

          [dont-take-it-personal][joke-too-easy-to-resist]
          "Much like her mother"? she has poor taste in men?
          [/joke-too-easy-to-resist][/dont-take-it-personal]

        • So many tasteless jokes in such a short thread. Root, Box, turning 18, like her mother .... I ... must ... resist ...

        • by nschubach (922175)

          She has a daughter and a husband as well? :p

    • Re: (Score:3, Informative)

      "Windows 64-bit: Full support for 64-bit is available in recent beta versions of Sandboxie. Click here"

      Looks like they are working on that. :)

    • I use that too. Program I'm not sure about? Run it in SandboxIE and delete the Sandbox when I'm done. Website that might impact my security? Run it while my browser is under SandboxIE so I'm safe from viral threats.

    • by tunapez (1161697)

      Been using SandboxIE for 3 years now. Highly recommended utility.
      Right-click any program and run it sandboxed.

      Additionally, it's useful for testing captured malware. Running it in a VM is recommended; you never know if/how/when the sandbox may be subverted.

  • by Anonymous Coward on Wednesday January 06, 2010 @04:37PM (#30674554)

    Maybe we should just stop using the goddamn browser as an operating system. It was never meant to be anything more than a way to view mainly static documents, and quickly access other linked documents.

    While some interactivity is of course useful and sensible, some fools have gone off the deep end and think we should treat the browser as some sort of an application development platform.

    Of course, anyone who has done real application development under a real operating system, even if it is just Windows, knows how poor the browser is as such a platform. It's clear that everything, from JavaScript to AJAX to Flash, has been tacked on as a shitty afterthought.

    The answer isn't sandboxing. The answer is that we need to go back to using the browser as just a browser, and nothing else. And any real applications that demand network connectivity should be written as such, and run outside of the browser.

    • Re: (Score:3, Insightful)

      by phantomfive (622387)

      The answer is that we need to go back to using the browser as just a browser, and nothing else.

      It's never going to happen. The browser is too useful for too many other things. If somehow we managed to get the browser to return to being just a page viewer, someone (like Microsoft) would create an API for online applications and call it a non-browser. In fact, this was the original idea behind .net, and why it is called .net. Online applications AKA cloud based applications are here to stay.

    • Re: (Score:2, Insightful)

      by AvitarX (172628)

      It was never meant to be anything more than a way to view mainly static documents, and quickly access other linked documents.

      You are wrong wrong wrong. For many years now the browser has been meant for more than that. It originally may not have been meant for more than that, but to say it never was is stupid. The reason MS panicked about it was that there was an express intent to make the browser more than that.

    • by Colin Smith (2679) on Wednesday January 06, 2010 @05:06PM (#30674840)

      Web servers don't serve html documents any more, they serve remote procedure calls from javascript front ends.
       

    • If you stopped and asked why those 'fools' would want to accomplish a task in a browser, you'd realize why 'writing apps that run outside of the browser' isn't the ideal situation.

    • by Locutus (9039) on Wednesday January 06, 2010 @05:26PM (#30675126)
      and things like ActiveX don't apply to the "been tacked on as a shitty afterthought" comment? From what I've seen, Microsoft is the king of tacking things on as a shitty afterthought; otherwise they'd not still be known for security and reliability problems. Rebooting a Windows computer is still the number one recommendation for 'fixing' a broken Windows system across many IT orgs, and reinstalling Windows is probably still in the top 10 things done to 'fix' the computer.

      Besides, it's been Microsoft's attacking of software application vendors on their platform that has led to so much being attempted in the browser, since it isolates them so much from Microsoft. You don't hear so much any more about whose software broke at every release of a new version of Microsoft Windows. That's because more and more business applications are fed from app servers to browsers, and a minimum standard feature set must be met in the browser for it to be useful across the web and therefore the intranet.

      This has little to do with the browser being the problem; it is about the design of the Windows OS not doing its own memory protection and letting applications run many things as admin when they should run as the user, and they should not be accessing OS or other applications' memory space. This is another crutch for a bad design, but it'll help sell more hardware if that's what you want.

      LoB
    • That horse bolted the barn a long, loooong time ago.

    • The answer isn't sandboxing.

      Yes it is, because even if the browser didn't have everything but the kitchen sink in it, it could still (for example) have a buffer overflow bug in an image decode library. When that bug gets triggered, you want that process to be "nobody."
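The "run as nobody" step that comment describes can be sketched in Python (a hedged illustration, assuming a Unix system with a 'nobody' account; the drop must happen while still root, before any untrusted data is parsed, and is demonstrated in a forked child so the parent keeps its privileges):

```python
import os
import pwd

def drop_to_nobody():
    """Irrevocably drop root privileges to the 'nobody' account.

    Order matters: supplementary groups first, then gid, then uid;
    once setuid() succeeds there is no way back to root.
    """
    nobody = pwd.getpwnam("nobody")
    os.setgroups([])           # shed root's supplementary groups
    os.setgid(nobody.pw_gid)
    os.setuid(nobody.pw_uid)

# Demonstrate in a forked child: the child sheds privileges (if it has
# any) right before it would touch untrusted input, e.g. decode an image.
pid = os.fork()
if pid == 0:
    if os.geteuid() == 0:
        drop_to_nobody()
    os._exit(0)
os.waitpid(pid, 0)
```

After the drop, a buffer-overflow exploit in the image decoder owns only an unprivileged process, which is exactly the point being made above.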

      • by mlts (1038732) *

        The Web browser can be extremely secure. However, what attackers are going after isn't the browser any more. They are gunning for the add-ons, both the big names and the small things. All it takes is a hole in even a relatively obscure add-on, and an attacker now has code running in the security context of the add-on. As it stands now, the security context of an add-on is the same as the browser's, which is usually the same as the user's... and oftentimes that means root privs.

        Even user nobody is somebody. You want a

  • by bcrowell (177657) on Wednesday January 06, 2010 @04:40PM (#30674594) Homepage

    All security problems are easy to solve if you have users who are sophisticated about security, and motivated to put up with inconveniences. The real world isn't like that.

    A proposal like this inevitably requires that the user understand something about the sandbox, and also requires that the user go through various hassles because of the sandbox. They're going to perceive it as a hassle, because the sandbox is going to prevent them from doing things they would otherwise have done. If they're unsophisticated and unmotivated, they'll just see it as something to work around.

    Not only that, but this isn't an optimal solution. A flash game has to be a Turing-complete program. A memo doesn't have to. The simple solution is just to stop embedding Turing-complete programming languages in file formats that don't require them. Adobe actually started by designing PostScript as a Turing-complete language. That had some unfortunate consequences, since, e.g., you can't predict whether a program written in a Turing-complete language will halt, so in principle you can't predict whether a document will take forever to come out of the printer. They realized that that was a mistake, and when they designed PDF, they intentionally made it not Turing complete. Now we've come full circle, and they've added a Turing-complete language, JavaScript, back into PDF. That's just bad design. The solution for users is actually pretty easy: if you're using Adobe Reader, turn off JavaScript.

    • by jhol13 (1087781)

      user understand something about the sandbox

      Why? The user has no clue about ssh privilege separation and still it works extremely well.

      user go through various hassles because of the sandbox

      I agree that neither Microsoft nor Linux can do it well, but just perhaps Apple might be able to do it seamlessly.

      A flash game has to be a Turing-complete program.

      Turing complete does not mean "must have access to the whole system".
      Actually it means nothing in this context (there exist exactly zero truly Turing-complete systems in the "real world"). AFAIK a PDF can take "long enough" to print that it makes no difference whether it is ever going to halt or not.

  • Wasn't sandboxing the cool word about 10 years ago?
    • Re: (Score:3, Funny)

      by MrEricSir (398214)

      Yes, but a big bully came and stomped on all our sand castles. Now that we've grown up a little, it's time to try again.

      • Yeah, maybe if we built a good enough sandbox, Microsoft wouldn't be able to stomp our sand castles.
    • by danlip (737336)

      Wasn't sandboxing the cool word about 10 years ago?

      Actually 15 years ago, when Java came out.

  • wha? (Score:3, Insightful)

    by jasno (124830) on Wednesday January 06, 2010 @05:06PM (#30674862) Journal

    Security is not and never was about SYN packets

    Security is about everything, period.

  • 'Instead of the usual top ten lists that are all-too-common with predictions for the new year, I have just one: 2010 will be the year of desktop applications handling untrusted data in sandboxed processes, and it will be about time.

    Let us all know how that works out for you this time next year, big boy?

  • Just yesterday I was reading about Isolate (http://code.google.com/p/isolate/ [google.com]), which looks like it goes to the core of the problem. You can sandbox any app without needing to sandbox the whole desktop/OS/etc. for it. So if your browser or media player, or other programs, could risk doing something locally that you don't want, you can run them in a way that doesn't touch or modify anything private, in a very easy way.
    • Re:Isolate (Score:4, Funny)

      by Fnord666 (889225) on Wednesday January 06, 2010 @06:18PM (#30675862) Journal
      From the Isolate web site:

      isolate currently suffers from some bad security bugs! These are local root privilege escalation bugs. Thanks to the helpful person who reported them (email Chris if you want credit!). We're working to fix them ASAP, but until then, isolate is unsafe and you should uninstall it. Sorry!

      This doesn't really sound like the solution most people are looking for.

  • by Locutus (9039) on Wednesday January 06, 2010 @05:15PM (#30674988)
    really? sandboxing desktop apps? Look at the design goals of any real OS: providing security, memory protection (from other apps and OS space), indirect access to hardware, and smooth multitasking between apps and OS are right up there near the top. Memory protection is WAY up there unless you're looking at special-purpose realtime applications or micro-controller apps. Now what we are seeing on Windows is yet another layer in an attempt to fix a bad design, and one which will continue to slow down the system while pushing the hardware. It's great if you are out to sell more expensive hardware and you don't want lower-end (cheaper-priced) hardware to run your software. You know, like how Vista ran so well on netbooks, and how Windows 7 is better than Vista at that but still worse than Windows XP.

    Sandboxing is basically what virtual machines like VMWare, VirtualBox, KVM, VirtualPC all do. Off of Windows, it gives users a way to run Windows without rebooting their main OS. On Windows, it gives businesses a way to keep one crashing Windows server from taking down the other servers and in the desktop it lets users boot Linux without rebooting Windows. But for app protection? That's what the OS is supposed to be doing.

    LoB
    • by jpmorgan (517966) on Wednesday January 06, 2010 @05:54PM (#30675510) Homepage

      This isn't a Windows specific problem. The fundamental problem is the user/process model that's been popular since the inception of UNIX (maybe even earlier, I don't know enough about Multics to say): the idea that only users have identities and programs run under the identity (and permissions) of the user who runs it. If I'm running a game, there's no reason why it needs access to my tax spreadsheets, etc...

      All software should be running under its own identity and access to user documents should be through standardized user interfaces... i.e., the 'File Open' dialog is actually a part of the OS not the application, and also grants temporary permissions in addition to just selecting a file.

      We talk about the principle of 'least privilege' but in practice (with a few notable exceptions) the 'low-privilege' processes have the most important privileges of all: access to all our stuff.
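One way to sketch the "File Open dialog grants the permission" idea above is descriptor passing: a trusted broker opens the file the user picked and lends the sandboxed child only the open descriptor, never filesystem-wide rights. (The broker/child split below is a hypothetical illustration of the pattern, not any particular OS's API.)

```python
import os
import subprocess
import sys
import tempfile

def broker_open_for_child(path):
    """Trusted side: open the chosen file, lend only the fd to the child."""
    fd = os.open(path, os.O_RDONLY)
    try:
        child = subprocess.run(
            [sys.executable, "-c",
             "import os, sys; fd = int(sys.argv[1]); "
             "sys.stdout.write(os.read(fd, 4096).decode())",
             str(fd)],
            pass_fds=(fd,),  # child inherits stdio plus this one extra fd
            capture_output=True, text=True, check=True,
        )
    finally:
        os.close(fd)
    return child.stdout

# The "user" picks a file; the broker opens it on the child's behalf.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("the grant is the descriptor, not the filesystem")
    doc = f.name

print(broker_open_for_child(doc))
os.unlink(doc)
```

The child never sees the path and never needs filesystem permissions of its own; revoking access is as simple as closing the descriptor, which is the temporary-grant behavior the comment argues for.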

      • by Compholio (770966)

        This isn't a Windows specific problem. The fundamental problem is the user/process model that's been popular since the inception of UNIX (maybe even earlier, I don't know enough about Multics to say): the idea that only users have identities and programs run under the identity (and permissions) of the user who runs it. If I'm running a game, there's no reason why it needs access to my tax spreadsheets, etc...

        You mean like SELinux? Brought to you by the NSA since 2000 and included in the kernel since 2003?
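The flavor of that per-program identity in SELinux policy language looks roughly like this (the type names are made up for illustration; real policies are generated from interface macros):

```
# Hypothetical types: the game runs as mygame_t, its own data is mygame_data_t
allow mygame_t mygame_data_t:file { read write };

# No allow rule mentions the user's tax documents (say, user_tax_t), so in
# SELinux's default-deny model the game simply cannot open them, even though
# it runs as the same Unix user.
```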

      • We talk about the principle of 'least privilege' but in practice (with a few notable exceptions) the 'low-privilege' processes have the most important privileges of all: access to all our stuff.

        What are you talking about? None of my users---whom I've made the embodiment of "least privilege"---have access to any of my stuff...

        -- your friendly neighbourhood BOFH

        • by jhol13 (1087781)

          But the game you play does have access. Or "ls" (dir in Windows), although it needs only read access to directories.

  • Awesome! (Score:5, Funny)

    by InlawBiker (1124825) on Wednesday January 06, 2010 @05:18PM (#30675024)

    I was just handed a memo from a collection of all major software and hardware vendors on Earth, saying that security will be put ahead of profits from now on! It was delivered by a Unicorn, who got here on the gumdrop express via the rainbow highway.

    • But that's unrealistic! We all know that unicorns are pink and invisible. But by being invisible, the light they emit doesn't refract in the same way as the sun's, so it couldn't have gone by the rainbow highway.

      I think a much more plausible hypothesis is that it was riding a flying pig.

  • I can hardly wait for the flurry of sandboxing patents.

  • by vlm (69642) on Wednesday January 06, 2010 @05:41PM (#30675314)

    Cool, instead of screwing up the simple task of validating inputs, we'll simply screw up the complicated task of sandboxing. Awesomeness!

  • Fundamental Problem (Score:3, Interesting)

    by Ohio Calvinist (895750) on Wednesday January 06, 2010 @06:00PM (#30675604)
    The fundamental problem is that users want their computers to do things. They want responsive rich-media web applications, so the conventional wisdom of turning off everything but HTML rendering causes their computer to stop doing stuff it used to be capable of. The second problem is that for computers to do things, particularly in networked environments, processes may be working with trusted, semi-trusted, or untrusted stuff (be it content, code, whatever; it doesn't matter for the purpose used).

    When security tools attempt to figure out what ought to be trusted or not and get it wrong, you either do something unsafe or you block the user from doing what they want to do (even if you or I would consider what they want to do foolish or downright dangerous). When users are expected to indicate what is trusted or not, they generally lack the insight to know what to pick, and vendors are at peril of designing annoying software that provides little true security if users always click "yes", causing the unsafe action to happen, or that prevents their computer from working as expected if they always click "no."

    Sandboxing can be effective at limiting access to other applications' data, but it can greatly limit interoperability. It requires the developer to make some decisions on behalf of the user, or to ask the user how isolated the process should be from other resources in a way that is meaningful, so that they can understand what the consequences will be if they approve (ideally at setup).
  • by argent (18001) <peter@slashdot.2 ... com minus physic> on Wednesday January 06, 2010 @06:04PM (#30675666) Homepage Journal

    Sandboxing means that once the attacker has used an input exploit to own the process, it has to perform a privilege escalation exploit to get out of the sandbox. The problem is that applications running in sandboxes have to be able to write files, read files, load and install plugins, execute helper applications, and generally do just about anything a regular application has to. So the sandbox can't be very "strong".

    Instead of adding a leaky sandbox, how about reducing the surface area exposed to attack in the first place? Simplify the application. Get rid of things like XPI in Firefox and ActiveX in IE. Get rid of the need for third party plugins like Java and Flash (HTML5 goes a long way here). Get rid of the ability for network apps to masquerade as local apps (there's no reason a web page should be allowed to remove the status and address bar, for example). Don't even *offer* to automatically open a file after downloading. Remove that option from the browser completely. Get rid of Acrobat and other plug-in document viewers.

    Yes, this might make it less convenient for websites to "wow" the user. So what? I'd rather be safe than "wow"ed.
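    The tension above — a sandboxed process still needs *some* access to be useful — is why real designs isolate the risky parsing step rather than the whole app. A minimal sketch of that idea (the function names and limits here are hypothetical, and real sandboxes would also drop filesystem and network access, e.g. via seccomp or jails):

    ```python
    import multiprocessing as mp
    import resource

    def parse_untrusted(data: bytes, conn) -> None:
        """Runs in a separate process: cap CPU and memory before touching the data."""
        # Hypothetical limits for illustration only.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))              # 2s of CPU time
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)   # 256 MiB address space
        try:
            # Stand-in for a risky parser (e.g. image or document decoding).
            result = data.decode("utf-8", errors="replace").upper()
            conn.send(("ok", result))
        except Exception as exc:
            conn.send(("error", repr(exc)))
        finally:
            conn.close()

    def sandboxed_parse(data: bytes, timeout: float = 5.0):
        """Trusted side: runs the parser in a child and survives its failure."""
        parent, child = mp.Pipe()
        proc = mp.Process(target=parse_untrusted, args=(data, child))
        proc.start()
        proc.join(timeout)
        if proc.is_alive():       # runaway parser: kill it, report failure
            proc.terminate()
            return ("error", "timeout")
        return parent.recv() if parent.poll() else ("error", "crashed")

    if __name__ == "__main__":
        print(sandboxed_parse(b"hello"))
    ```

    A compromised or crashing parser takes down only the child, not the application holding your bookmarks and saved files — which is exactly the split the article is predicting.
    
    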

    • Re: (Score:3, Insightful)

      by hedwards (940851)

      Sandboxing means that once the attacker has used an input exploit to own the process, it has to perform a privilege escalation exploit to get out of the sandbox. The problem is that applications running in sandboxes have to be able to write files, read files, load and install plugins, execute helper applications, and generally do just about anything a regular application has to. So the sandbox can't be very "strong".

      That's not really the sandbox's fault, so much as the way that people design and run their OS. FreeBSD, for example, can combine file flags and securelevel to prevent any changes at all to files that are so marked. It can be a pain in the ass, but it makes it very difficult for somebody to remotely install something to run at boot time.

      Additionally, a proper sandbox shouldn't allow one to write to any portion of the hard disk that's directly accessible to the OS, and should really require exporting t

      • by argent (18001)

        Yes, you can create a sandbox using a FreeBSD jail, and you can union-mount everything in the jail over a read-only file-system, and you can wipe the jail after every run, and that would be pretty secure. Personally, I run dubious software in a VM and roll back to a checkpoint afterwards.

        But then people would be a bit upset that their bookmarks didn't get saved and they couldn't download files. Because most of the software that would need to be sandboxed is the stuff that people use regularly. A *useful* sa

    • I'd rather be safe than "wow"ed.

      Your users (family members etc.) think they're safe, and want to be wow'ed. They're going to complain if they can't be.

      Don't even *offer* to automatically open a file after downloading.

      What's gained by having the user traverse a path of directories before opening the file manually, versus having the user explicitly ask for the file to be opened automatically at some later point? I mean, it's going to get opened one way or the other, right?

      When's the last time you downloaded a file and then immediately decided never to open it?

      • by argent (18001)

        What's gained by having the user traverse a path of directories before opening the file manually, versus having the user explicitly ask for the file to be opened automatically at some later point?

        Well, let me explain something. I've been a system and network admin for 20 years. I can't count the number of times that someone has come to me saying "um, I think I clicked the wrong thing and I think I have a virus". What they've done is some link has downloaded a file, and then IE has popped up a dialog asking

    • Re: (Score:3, Informative)

      by jhol13 (1087781)

      applications running in sanboxes have to be able to write files, read files, load and install plugins, execute helper applications,

      No, they don't.

      They can be made so that the only way to access the file system is through a File Dialog (see Java Web Start / JNLP).
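      The File Dialog approach is a broker pattern: the sandboxed code never opens files itself, it asks a trusted process that only grants paths the user actually picked. A hypothetical sketch (the allow-list stands in for a real dialog; all names are illustrative):

      ```python
      import multiprocessing as mp

      # Stand-in for a user-driven file dialog: only paths the user has
      # explicitly approved are readable by the sandboxed side.
      USER_APPROVED = {"/tmp/approved.txt"}

      def broker(conn) -> None:
          """Trusted side: mediates every file request from the sandboxed process."""
          while True:
              path = conn.recv()
              if path is None:          # shutdown signal
                  break
              if path in USER_APPROVED:
                  try:
                      with open(path, "rb") as fh:
                          conn.send(("ok", fh.read()))
                  except OSError as exc:
                      conn.send(("error", repr(exc)))
              else:
                  conn.send(("denied", path))

      def untrusted_app(conn):
          """Sandboxed side: can only ask the broker, never open() directly."""
          conn.send("/etc/passwd")      # attack attempt on an unapproved path
          verdict = conn.recv()
          conn.send(None)               # tell the broker we're done
          return verdict

      if __name__ == "__main__":
          parent, child = mp.Pipe()
          proc = mp.Process(target=broker, args=(child,))
          proc.start()
          print(untrusted_app(parent))
          proc.join()
      ```

      Chrome's renderer/browser split works along these lines: the part that touches untrusted data holds no file handles of its own.
      
      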

      • by argent (18001)

        Now you're talking about an application-level sandbox around untrusted code. That's a good way of reducing the surface area, BUT it's also a completely different kind of animal than the article is talking about. He's talking about OS-level sandboxes around the whole application itself.

  • I herd u like sandboxing, so I put a sandbox in your sandbox so you can run in a sandbox when you're running in a sandbox.
  • Well, it's nice if Microsoft and Apple get their desktops upgraded, but AppArmor is standard on Ubuntu and has rules for common desktop apps in place.

    At the same time some people in the industry are talking about sandboxes, other people are talking about adding yet more features to the browser so web apps can compete with boxed software. Added functionality nearly always wins over added security, so I expect we'll go into 2011 with even more avenues for Russian mobsters to lift your identity.

    Java has had a sandbox since forever, and it's virtually unused in commercial applications. Why? Because it's a pain in the ass to give the user everything he want
