MS Publishes Papers For a Modern, Secure Browser

V!NCENT writes with an excerpt from a new publication by Microsoft: "As web sites evolved into dynamic web applications composing content from various web sites, browsers have become multi-principal operating environments with resources shared among mutually distrusting web site principals. Nevertheless, no existing browsers, including new architectures like IE 8, Google Chrome, and OP, have a multi-principal operating system construction that gives a browser-based OS the exclusive control to manage the protection of all system resources among web site principals. In this paper, we introduce Gazelle, a secure web browser constructed as a multi-principal OS. Gazelle's Browser Kernel is an operating system that exclusively manages resource protection and sharing across web site principals." Here's the full research paper (PDF).
  • by zappepcs ( 820751 ) on Sunday February 22, 2009 @02:34PM (#26950125) Journal

    Grammar problems aside, TFA blurb is difficult to read and talks about MS offering a web browser that is an OS Kernel.... that is secure... and backward compatible!

    I can only conclude that this website has been hacked, and this is a huge joke. Seriously, this sounds like the MS PR machine trying to pour salt directly into the wounds of the board members, or like it was written by a person suffering delirium after being hit in the head by a flying chair. Well, perhaps it's just the MS marketing department trying reverse psychology?

    In any case, it's rather surreal to read those words.

    I'm off to check that there are no foreign substances in my coffee.

  • Virtual Machine (Score:3, Interesting)

    by nurb432 ( 527695 ) on Sunday February 22, 2009 @02:51PM (#26950273) Homepage Journal

    Stick a full VM into the browser. Problem solved. Except of course for the huge resources needed to view even the simplest of pages.

    The entire push over the last few years to transfer processing load back onto the client is, in my opinion, the wrong direction; the browser should remain a THIN client, as originally intended. Keeping it a thin client would by its nature keep it secure.

  • Re:Does it really (Score:5, Interesting)

    by harry666t ( 1062422 ) <harry666t@DEBIANgmail.com minus distro> on Sunday February 22, 2009 @02:55PM (#26950315)
    > process creation overhead

    Why does Windows have so much more overhead for creating processes? What is it about Windows processes that makes them cost so much?
  • Re:Does it really (Score:4, Interesting)

    by UnderCoverPenguin ( 1001627 ) on Sunday February 22, 2009 @03:11PM (#26950437)

    Why is it so hard to see that a secure browser could be done using existing operating systems?

    My guess would be that it is more palatable to call something completely new more secure than anything we currently have than it would be to concede that a competitor is more secure (even if you are not MS).

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday February 22, 2009 @03:49PM (#26950719) Homepage


    > why is it so hard to then imagine that, given that the "browser" is doing everything that you can also do with desktop widget UI toolkits, you need the full range of OS technology to support that desktop

    > I could see a case for it. I could also see a case for doing it WITHOUT modifying the full range of OS technology. Why is it so hard to see that a secure browser could be done using existing operating systems?

    sorry, i assumed it would be clear. applications running within the browser are becoming more like _real_ applications - _real_ "desktop" applications, especially with downloadable-executable-code ("plugins" such as adobe's) having been thrown into the mix.

    and you have multiple of these "applications" running simultaneously.

    therefore, you have security implications, application stability implications, and much more [i recently had firefox crash out-of-memory on linux, and i have 2gb of ram and 3gb of swap space].

    therefore, you need to start looking at isolating the applications from each other, whilst also allowing them access, through a common API, to a central set of protected resources (screen, keyboard, mouse, other devices, memory, networking), so that they can communicate across that boundary without impacting any other applications or the central resource-management layer itself.
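
    To make that concrete, here is a minimal sketch in Python of the split being described: a "kernel" process that exclusively owns the protected resources, and one sandboxed process per site principal that can only ask for them over a pipe. Everything here (the policy table, the request protocol, the names) is invented for illustration; it is not code from the Gazelle paper.

        import multiprocessing as mp

        # Invented per-principal policy: which resources each site may touch.
        ALLOWED = {"a.example.com": {"network", "screen"},
                   "b.example.com": {"screen"}}

        def principal(site, conn):
            # Untrusted page code would run here; its only route to any
            # shared resource is a request over the pipe to the kernel.
            for resource in ("network", "screen"):
                conn.send((site, resource))
                print(site, resource, "->", conn.recv())
            conn.close()

        def kernel():
            # The "browser kernel": it alone checks every request against
            # the policy for the requesting principal and answers yes/no.
            workers = []
            for site in ALLOWED:
                parent_end, child_end = mp.Pipe()
                proc = mp.Process(target=principal, args=(site, child_end))
                proc.start()
                child_end.close()  # kernel keeps only its own end of the pipe
                workers.append((proc, parent_end))
            for proc, parent_end in workers:
                while True:
                    try:
                        site, resource = parent_end.recv()
                    except EOFError:
                        break  # principal exited and closed its pipe
                    parent_end.send(resource in ALLOWED[site])
                proc.join()

        if __name__ == "__main__":
            kernel()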

    and i think you'll find that if you look closely, that's pretty much the definition of an OS.

    so, working from the requirements - the expectation that good, hostile, rogue or simply badly designed applications all need to be given a chance to run, you arrive naturally at the rather unfortunately-logical conclusion that the only decent way to fulfil the requirements is with an actual full-blown operating system.

    to believe that anything else can fulfil the requirements, to provide multi-tasked application stability and security, really is sheer delusion, or is... like... expecting a 1980s apple mac OS with a 68000 CPU and no Virtual Memory support to be "secure".

    actually, there _is_ one other possibility: Security-Enhanced Linux (specifically, the FLASK security model behind SE/Linux). and we know what people think of _that_, despite SE/Linux being incredibly good at its job.

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday February 22, 2009 @04:24PM (#26950999) Homepage

    > Stick a full VM into the browser. Problem solved. Except of course for the huge resources needed to view even the simplest of pages.

    > The entire push over the last few years to transfer processing load back onto the client is, in my opinion, the wrong direction; the browser should remain a THIN client, as originally intended. Keeping it a thin client would by its nature keep it secure.

    noooo, nonono can do - yes it would be secure, but times have changed _drastically_. what's happened is that as the desktop wars got ridiculous (and i don't just mean between different OSes, i also mean between win95, xp and up), people simply moved to the browser itself to provide access to applications. all the talk of "ubiquitous computing" has actually _happened_.

    and, as the expectations of web infrastructure got ever greater, that original "thin client" architecture began to look... well... thin! so along came flash, and javascript, and god help us java, and then AJAX, and then GWT [google.com] and Pyjamas [pyjs.org], which _really_ make it clear that the browser really _is_ just another "widget set" like Python-QT4, Python-GTK2 or Java Swing, and somewhere rather unfortunately along the line silverlight got added to the mix.

    and once you're down this road, there really is no turning back. you're now running complex, comprehensive applications such as gmail.com, google apps and WebOS - and i do _mean_ applications - side-by-side in the same "space", and it's just getting too much for the poor little browsers, which were never designed to act as "operating systems".

    so i think what we're seeing here is the recognition that browsers have to become what OSes were designed to be, because browsers are now taking over what OSes were _supposed_ to be doing, because everyone's moving inexorably to online interaction now, instead of the "isolated desktop".

    so is anyone _really_ surprised that the solutions proposed are to use tried-and-tested proven technology, just moving it to where the focus has gone? current browser technology can be compared to OS technology of the Windows 1.0, GEM/DOS and early Mac era!

  • Re:Does it really (Score:3, Interesting)

    by Anonymous Coward on Sunday February 22, 2009 @06:04PM (#26951731)

    No, Mach had two problems.

    First and foremost, messages were not idempotent, and while the system allowed for reentrancy, it did not allow for at-most-once processing of multiple identical messages. Among other things this complicated locking and diminished locality of reference, which has grown important in the presence of hierarchical memories and non-uniform access times in multiprocessor systems and clusters.

    This problem is fundamental and architectural in Mach, but it is not inherent to message-passing microkernel architectures in general.
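
    For anyone who wants the "at-most-once" point spelled out: the receiver remembers the IDs of messages it has already handled and drops retransmitted duplicates instead of re-running their side effects. A toy illustration of the concept in Python - this is not Mach code, just the idea:

        import uuid

        class AtMostOnceReceiver:
            """Deduplicate messages by ID so a retransmit is handled at most once."""
            def __init__(self):
                self.seen = set()

            def deliver(self, msg_id, payload, handler):
                if msg_id in self.seen:
                    return  # duplicate: drop it rather than repeat the side effect
                self.seen.add(msg_id)
                handler(payload)

        rx = AtMostOnceReceiver()
        mid = uuid.uuid4()
        rx.deliver(mid, "debit $5", print)  # handled
        rx.deliver(mid, "debit $5", print)  # retransmit, silently dropped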

    Darwin 8, for example, explicitly considers cache hierarchies and NUMA, in part because at the time of Mac OS X 10.4, essentially every computer Apple was selling was dual-processor, and the high end was shipping shared L2 caches rather than just shared main memory.

    Mach also had a very narrow trust boundary that did not scale very well. Rights propagation should have been distributed as much as possible, taking lessons from Kerberos. Persistence of trust is important to avoid the constant recalculate-and-compare access-rights machinery in Mach.

    A number of these problems were fixed in Darwin 9, and previews of Darwin 10 suggest a great deal of thinking has gone into "third-party-introduction" rights acquisition distribution (which is also handy for Grand Central and clustering generally), as well as some ideas from Mach 4.

    > I would be highly skeptical that Microsoft has found a way around the performance problems that the Mach people missed.

    1. This is about Microsoft Research. Neat ideas, no productization, less cutthroatery.

    2. MSR has half of the Mach team in it (the other half is at Apple or has retired from there). Rashid, for example, admits mistakes and tries to learn from them. Tevanian followed "great artists ship" directives, and Darwin 9 / Mac OS X 10.5 has evolved into something with superior scaling properties to earlier versions of Mac OS X (10.0, 10.1, 10.2...). No doubt MSR's microkernel research people have checked out the open source and otherwise published work by their former colleagues at Apple. (They seem to use MacBook Pros running Mac OS X in public a lot!)

    Back to the main idea. It's kinda neat: each web site becomes a user with separate privileges from all the others, and different from the user who started the browser. This should prevent "home invasion" attacks at the very least, and assuming sensible defaults are placed on permissions owned by the browser-starting user, her or his files should be safe from malicious accesses.
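
    A rough Unix analogy to "each web site becomes a user", sketched in Python: fork a child per site and drop it to that site's own UID before running any of its code, so the kernel's ordinary file permissions do the isolation. The UID mapping is invented, this is Unix-only, the parent needs the privilege to switch UIDs, and it is of course not how Gazelle actually works.

        import os

        SITE_UIDS = {"mail.example.com": 20001, "ads.example.com": 20002}  # invented

        def run_site_code(site, fn):
            pid = os.fork()
            if pid == 0:
                uid = SITE_UIDS[site]
                os.setgid(uid)  # drop the group first, while we still can
                os.setuid(uid)  # irreversible for the now-unprivileged child
                fn()            # "site code" now runs with only that UID's rights
                os._exit(0)
            os.waitpid(pid, 0)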

    If this does not impose a burdensome slowdown on "power users", hopefully MS's idea will be implemented by someone. MSR ideas are often unlikely to be implemented by MS, however...

    Finally, your parent wrote:

    Using separate processes to render content on a single page causes significant latency due to process creation overhead.

    But exactly this kind of thing (multiple processes owned by possibly mutually-hostile users drawing on a shared screen) is normal in many operating environments.
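
    That overhead is also easy to measure for yourself. A crude micro-benchmark along these lines (Python, so interpreter costs are included - treat the numbers as indicative only) shows threads beating processes by a wide margin everywhere, with the gap largest on Windows, where processes carry the most baggage:

        import time
        import threading
        import multiprocessing as mp

        def noop():
            pass

        def avg_startup(n, make):
            # Average seconds to create, start, and join one unit of work.
            start = time.perf_counter()
            for _ in range(n):
                worker = make()
                worker.start()
                worker.join()
            return (time.perf_counter() - start) / n

        if __name__ == "__main__":
            print("thread :", avg_startup(50, lambda: threading.Thread(target=noop)))
            print("process:", avg_startup(50, lambda: mp.Process(target=noop)))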

  • Re:Does it really (Score:2, Interesting)

    by djelovic ( 322078 ) <(dejan) (at) (jelovic.com)> on Sunday February 22, 2009 @06:40PM (#26952027) Homepage

    Windows thread creation costs more than Unix thread creation because it does more. Whether that work is useful to most people is somewhat dubious.

    The Windows kernel is roughly based on VMS, which at the minimum has a different security model than Unix. The one in Windows is fine-grained, while the Unix one is fairly coarse.*

    In addition, a bunch of things in Windows have thread affinity and that has to be set up too. The concept of thread affinity for things like windows is pretty good for a desktop OS, fairly lousy for a server one.

    Dejan

    * The Windows security model is more powerful than Unix's user/group/world one (ask any large-corporation admin), but it comes at a significant performance and complexity price. I can teach any programmer the Unix security model in less than a minute, but I know very few Windows programmers who know anything about the Windows ACL/SID/Token APIs. (Yes, ma, that's that last parameter in all those calls that you always set to NULL to inherit from the thread settings.)
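
    For the curious, that habit looks like this through Python's ctypes on Windows (strictly illustrative; and depending on the call, LPSECURITY_ATTRIBUTES is the first or a middle parameter rather than literally the last, but the NULL reflex is the same):

        import ctypes  # Windows only

        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        kernel32.CreateMutexW.restype = ctypes.c_void_p   # HANDLE
        kernel32.CloseHandle.argtypes = [ctypes.c_void_p]

        # Passing NULL (None) for LPSECURITY_ATTRIBUTES: the mutex gets a
        # default security descriptor derived from the caller's access token.
        handle = kernel32.CreateMutexW(None, False, "demo-mutex")
        if not handle:
            raise ctypes.WinError(ctypes.get_last_error())
        kernel32.CloseHandle(handle)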

  • Re:Princi-what? (Score:3, Interesting)

    by Hurricane78 ( 562437 ) <deleted @ s l a s h dot.org> on Sunday February 22, 2009 @08:05PM (#26952691)

    Oh, and look at mobile phones. What is the language you have to write in, if you want it to work on every phone, without learning every single OS's API?
    Java! (With OpenGL ES as a very nice addition.)

  • Re:probably old info (Score:2, Interesting)

    by ShieldW0lf ( 601553 ) on Sunday February 22, 2009 @08:55PM (#26953099) Journal

    What the fuck does all this crap about forks and threads have to do with Microsoft and their efforts to secure your computer against you?

    There's a bunch of bullshit there about "Mutually Distrusting Principals". What that means is a bunch of corporate organizations who don't trust you, and don't want you to remain in control of your machine.

    This isn't about some website engaging in cross site scripting attacks and screwing with users. This isn't about user security at all.

    What this is about is allowing select, approved types of mashups to occur while still keeping everything totally locked down. It's about making not having control over your own machine somewhat palatable so maybe you'll be dumb enough to buy into this virtual prison system.

    This is for those assholes who abuse Flash to keep you from downloading media to your hard drive for later viewing, so to speak. They see all this Web 2.0 stuff going on, and they want to get in on the action, but they don't want to remove the locks to get there. They want them made stronger.

    Goddamn I'd love to burn those motherfuckers at the stake.

    Ok, go back to your inane wank about forks and processes... I'm done ranting here.

  • Re:Princi-what? (Score:4, Interesting)

    by ozphx ( 1061292 ) on Sunday February 22, 2009 @08:55PM (#26953107) Homepage

    Events/delegates do exactly what they are intended to do. They do not attempt to hide the fact that they reference the subscriber. If you are finding this an issue, I suggest you take a look at IDisposable, finalizers, or weak events.

    Don't think you can just pick up a tool and bang out code with a silly monkey grin on your face without understanding how it works.

    LINQ is a nice syntax. Beats a load of "new SomePredicate(left, right)". Of course this is not going to stop a bunch of newbies picking it up and not understanding how it works.

    If you are hiring a bunch of nubs, then I suggest you put up a big "CHECK ACCESS TO MODIFIED CLOSURES" poster.
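
    That poster is about C#'s lambdas capturing the variable rather than its value. Python has the same late-binding trap, which makes for a compact illustration (an analogue, not the C# case itself):

        # Each lambda closes over the variable i, not its value at definition time.
        callbacks = [lambda: i for i in range(3)]
        print([f() for f in callbacks])  # [2, 2, 2] -- every lambda sees the final i

        # The usual fix: bind the current value with a default argument.
        callbacks = [lambda i=i: i for i in range(3)]
        print([f() for f in callbacks])  # [0, 1, 2]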

    An increase in expressiveness in the language is a good thing. It doesn't magically mean that less-skilled devs can suddenly churn out complex bug-free software without knowing what the hell they are doing, though...
