MS Publishes Papers For a Modern, Secure Browser 296

V!NCENT writes with an excerpt from a new publication by Microsoft: "As web sites evolved into dynamic web applications composing content from various web sites, browsers have become multi-principal operating environments with resources shared among mutually distrusting web site principals. Nevertheless, no existing browsers, including new architectures like IE 8, Google Chrome, and OP, have a multi-principal operating system construction that gives a browser-based OS the exclusive control to manage the protection of all system resources among web site principals. In this paper, we introduce Gazelle, a secure web browser constructed as a multi-principal OS. Gazelle's Browser Kernel is an operating system that exclusively manages resource protection and sharing across web site principals." Here's the full research paper (PDF).
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Principle. Principal. ?? WTF?

    • Re:Princi-what? (Score:5, Insightful)

      by Divebus ( 860563 ) on Sunday February 22, 2009 @01:34PM (#26950129)

      Fascinating. Microsoft murdered Netscape and Java for going in this direction a decade ago and now they're writing about it like they invented the notion.

      • But Netscape / Mozilla didn't continue this.

      • Re:Princi-what? (Score:5, Insightful)

        by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Sunday February 22, 2009 @01:56PM (#26950325)

        No. They tried to murder them for power. Pure power. IE was the one browser to rule them all.
        Fortunately they were too stupid to do anything useful with that power. They only saved the money to continue developing their web developer torture instrument called IE.

        Luckily, then the great Mozilla rose:

        Mammon slept. And the beast reborn spread over the earth and its numbers grew legion. And they proclaimed the times and sacrificed crops unto the fire, with the cunning of foxes. And they built a new world in their own image as promised by the sacred words, and spoke of the beast with their children. Mammon awoke, and lo! it was naught but a follower.

        -- from The Book of Mozilla, 11:9 (10th Edition)

        And Java is as far from dead as possible. Sun won the lawsuit against MS, and Java is one of the most used server languages.

        I see the good of it. Without this event, there would be no Firefox, maybe no XHTML as we know it, not such a big popularity of open source software, and not the freedom of add-ins like AdBlock Plus or Greasemonkey and Firebug.

        But I do not thank Microsoft for that.

        • Re: (Score:3, Insightful)

          by pyrbrand ( 939860 )
          Actually, they murdered them for competition, as Corporations tend to do (I'm pretty sure there's no one on any side of these markets that would turn away market share).
          • by DavoMan ( 759653 )

            Actually, they murdered them for competition, as Corporations tend to do.

            Google up the difference between competitive and anti-competitive. Of course MS are a corporation - but there are some things you can do to make money, and some things you can't.

            One of those things you can't do is engineer ways to prevent competitors from making a better product. That is a bad thing because then the top dog won't have any reason to innovate. Hence we have IE6.

            (I'm pretty sure there's no one on any side of these markets that would turn away market share).

            To assume corporations are faceless and any company would do what any other company would do is just silly. If that were the case, then corpora

          • by Divebus ( 860563 )

            Actually, they murdered them for competition

            That's closer to what I remember, but it was more than competition (or power), it was survival. Microsoft recognized this Java stuff running on Netscape had the potential to obviate the Windows operating system. Since Microsoft couldn't counter this technology with something more compelling than "write once, run anywhere", the best they could do was partner with Sun, then become a bull in the china shop to destroy Java from the inside.

        • Re:Princi-what? (Score:5, Insightful)

          by Divebus ( 860563 ) on Sunday February 22, 2009 @02:40PM (#26950647)

          And Java is as far from dead as possible.

          Only through the force of programmers who eventually detected what Microsoft was up to. Please chime in if you have experience from the era of Visual Studio 97 and Visual Studio 6.0 and what it meant for polluting Java.

          Initially, Microsoft "partnered" with Sun to embrace and develop Java. They released Visual Studio which included tools to work with Java - on Microsoft's terms. Sun quickly realized that Microsoft was targeting the Java language and the JVM for destruction and sued. Microsoft was extending Java to include Windows-only system calls, violating the agreements.

          By the next year (1998), Microsoft was ordered to stop producing tools which used Sun's Java - but they continued with their own implementation (J++) which essentially extended Java but stripped away all the cross platform functionality. That was a knife in Java as intended - write once, run anywhere. By that time too many developers were using Microsoft's tools and they went along for the ride.

          This is why so many people run the other way when Microsoft wants to get on board the Open Source bandwagon. Your throats are scheduled to be slit next.

          • You're absolutely correct.

            Luckily I had enough insight to toss Visual Studio 6 and J++ out the window. What a load of crap that was! Horrible IDE!

            Sun has done remarkably well on the server end. If you actually look into what most sites are running on, most of the big sites, government sites, and sites with great uptime are all powered by Java.

          • Your comment was exactly what I wanted to say with my sentence. I thought all this would be well-known around here.

            But good that you wrote what I did not have the energy to write. :)

        • by ady1 ( 873490 ) *

          There was no stupidity in their behavior.

          There was no point in adding features since they already destroyed netscape and essentially, won the browser war.

          Can't think of a decent car analogy for this one.

          • Can't think of a decent car analogy for this one.

            How about when Henry Ford's cheap new gasoline powered vehicle literally drove the electric cars of the day off the road? One hundred years later and cars go only about twice as fast as they did back then, carry the same number of people, and cost a greater percentage of the average yearly income? Not to mention the environmental impact and the geo-politics of having one major energy source. Sure, they're more comfortable, but where's my fraking fusion-power

            • replace electric car with horse and buggy and it is closer.

              Electric cars have a major fatal flaw: power. You need a lightweight yet high-density power source to make up for the fact that gasoline is a lightweight, high-density power source. Electric cars are nice, but without a decent power source they don't have a fraction of the range of a gas car of the same size.

              Also flying cars are nice in concept until you realize the truth. most of the drivers on the road today can barely handle a car that only can go in t

        • It's a cookbook!

        • No. They tried to murder them for power. Pure power. IE was the one browser to rule them all. Fortunately they were too stupid to do anything useful with that power. They only saved the money to continue developing their web developer torture instrument called IE

          Netscape committed suicide through incompetence. Compared to Netscape 4, IE was - and still is - a far superior browser. So is Netscape 3, for that matter, and in fact I'd use Lynx before touching N4 ever again. Hard as it might be to believe, in t

          • Re: (Score:3, Informative)

            by Hurricane78 ( 562437 )

            I'm sorry, but did you actually use Netscape 4 and IE 4??

            I did. I even programmed in them. And hell, all the cool features did not work in IE!

            DHTML? JavaScript? They were in the same horrible state as they are today.

            And IE did not even have a mail client, calendar, or anything else.

             I used Opera in the time between when Netscape 4.51 died and when Mozilla/Firefox got fast enough and had enough applications to use it for more than development.

            They did win for one simple reason: They gave their browser away with their o

  • Does it really (Score:2, Insightful)

    by Bromskloss ( 750445 )

    ...have to be this complicated?

    • Re:Does it really (Score:5, Informative)

      by digitalunity ( 19107 ) <digitalunityNO@SPAMyahoo.com> on Sunday February 22, 2009 @01:31PM (#26950099) Homepage

      Highlights:

      • MS admits IE8 isn't secure.
      • Initial latency on named pipes is poor.
      • .NET based image serialization performance is poor.
      • Gazelle's plugin architecture will require software publishers to rewrite most of their plugins.
      • Using separate processes to render content on a single page causes significant latency due to process creation overhead.
      • Re:Does it really (Score:5, Interesting)

        by harry666t ( 1062422 ) <harry666t@nosPAm.gmail.com> on Sunday February 22, 2009 @01:55PM (#26950315)
        > process creation overhead

        Why does Windows have so much more overhead for creating processes? What is it about the Windows processes that makes them cost that much?
        • by isorox ( 205688 ) on Sunday February 22, 2009 @02:11PM (#26950435) Homepage Journal

          What is it about the Windows processes that makes them cost that much?

          License fees?

          The kernel has to ensure processes are obeying any DRM and WGA restrictions

        • Re:Does it really (Score:5, Informative)

          by beuges ( 613130 ) on Sunday February 22, 2009 @02:12PM (#26950445)

          Same reason that thread creation is cheap in Windows but expensive in Linux - different designs to suit different usage methodologies. In the *nix world, it's very common to fork off new processes to deal with tasks, whereas in Windows, the trend is to keep everything within the same process, with multiple threads handling various tasks. Either methodology will work in either OS, and Microsoft could redesign Windows to favour processes instead of threads, and Linus et al could redesign Linux to favour threads instead of processes, but due to the way the OSes are currently used, it would be pointless.
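          The relative cost of the two approaches can be eyeballed with a quick timing sketch. The `time_spawn` helper below is hypothetical, and absolute numbers vary wildly by OS, Python version, and hardware, so treat the output as indicative only, not a benchmark:

          ```python
          import multiprocessing as mp
          import threading
          import time

          def noop():
              pass  # a worker that does nothing, so we only measure creation/teardown

          def time_spawn(n, make):
              """Time creating, starting, and joining n workers produced by make()."""
              start = time.perf_counter()
              workers = [make() for _ in range(n)]
              for w in workers:
                  w.start()
              for w in workers:
                  w.join()
              return time.perf_counter() - start

          if __name__ == "__main__":
              n = 20
              t_threads = time_spawn(n, lambda: threading.Thread(target=noop))
              t_procs = time_spawn(n, lambda: mp.Process(target=noop))
              print(f"{n} threads:   {t_threads:.4f}s")
              print(f"{n} processes: {t_procs:.4f}s")
          ```

          On most systems the process column comes out noticeably slower than the thread column, and the gap is typically wider on Windows than on Linux, which is the point the parent is making.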

          • Re: (Score:3, Informative)

            by speedtux ( 1307149 )

            Thread creation in Linux is not expensive.

            • probably old info (Score:5, Informative)

              by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Sunday February 22, 2009 @03:07PM (#26950873)

              Linux threads were relatively heavyweight in early implementations, just about as much so as processes; the current implementation is much lighter weight. So some books still floating around contain that info, since it used to be true.

              A sort of separate issue is that, for a variety of reasons, most Linux distros on x86 ship with a default 8MB pthread stack size, which is fairly high: spawning a mere 50 threads gets you a nice 400MB of thread stacks. You can set the stack size smaller with pthread_attr_setstacksize, and the unused parts of those stacks can mostly live harmlessly in non-resident virtual memory, but it still makes threads seem heavier weight than they ought to seem.
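              The same knob is exposed above C as well. In Python, for instance, `threading.stack_size()` plays the role of `pthread_attr_setstacksize`; this is just a sketch, and the 256 KiB figure is an arbitrary choice that happens to be plenty for a shallow call stack:

              ```python
              import threading

              def worker():
                  pass  # a shallow call stack needs nowhere near the 8 MiB default

              # 0 means "use the platform default" (commonly 8 MiB with glibc on x86 Linux).
              print(threading.stack_size())

              # Request 256 KiB stacks for threads created from here on.
              threading.stack_size(256 * 1024)

              t = threading.Thread(target=worker)
              t.start()
              t.join()
              ```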

          • Re: (Score:3, Informative)

            by ady1 ( 873490 ) *

            To add to this, threads are considered to be inexpensive in terms of RAM usage. Historically, Windows was designed for smaller computers with small amounts of RAM.

            Looking back, it's almost comical to think how much RAM each of MS's OSes required. Although the architecture changed significantly from Windows 95 to Windows NT/2000/XP, the requirement that programs designed for older OSes keep working kept the threading mechanism almost the same, and therefore a more thread-friendly environment.

          no, see my earlier posting on this subject: the use of Security Descriptors and potential checking against the PDC is what makes process creation expensive, which then makes _thread_ creation so cheap in NT by comparison. ... you can't really secure threads from each other, so why bother, basically, was the general attitude that can clearly be seen to have been taken.

        • Re:Does it really (Score:5, Informative)

          by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday February 22, 2009 @02:59PM (#26950803) Homepage

          short answer: the ACL-based security model, which is transparently networked onto "NT Domain Security".

          the design comprises:

          * the evaluation of the security descriptor, which is a binary blob that needs to be decoded

          * the creation of a process, where the parent has a security descriptor "inheritance" chain to its parent, to its parent etc. etc.

          * the possibility for evaluating an individual ACE that could be on a remote machine (a PDC)

          * just the _possibility_ of having to contact the remote machine (the PDC) leaves a design where the creation even of a local process requires the use of MSRPC (on "local rpc" pipes - ncalrpc) in order to not drastically overcomplicate the code any more than it already is.

          goodness knows what else is going on, but it's very very powerful but unfortunately with that power and flexibility of design comes a whopping great overhead.

          and no you can't cache the results very much because someone might revoke a user's right to CREATE_PROCESS and they'd get a bit unhappy about that not being obeyed.

        • Re: (Score:3, Interesting)

          Comment removed based on user account deletion
        • by MrMr ( 219533 )
          DRM
        • Re: (Score:2, Interesting)

          by djelovic ( 322078 )

          Windows thread creation costs more than Unix thread creation because it does more. Whether that work is useful to most people is somewhat dubious.

          The Windows kernel is roughly based on VMS, which at the minimum has a different security model than Unix. The one in Windows is finely grained, while the Unix one is fairly coarse.*

          In addition, a bunch of things in Windows have thread affinity and that has to be set up too. The concept of thread affinity for things like windows is pretty good for a desktop OS, fairly

      • Re:Does it really (Score:5, Insightful)

        by CodeBuster ( 516420 ) on Sunday February 22, 2009 @02:15PM (#26950461)

        Using separate processes to render content on a single page causes significant latency due to process creation overhead.

        It reminds me of the practical problems that were encountered in the Mach kernel [wikipedia.org] implementations and which, despite great initial interest and subsequent effort, were never satisfactorily resolved. In fact, many have concluded that the concept of independent kernel processes cooperating via message passing, regardless of the tasks that they are attempting to perform, is inherently slower than single-process monolithic designs, and although object orientation allows greater flexibility and abstraction it is always paid for in raw performance. In many cases, and particularly in user-space application software, the price is worth paying. However, it turns out that OS kernels are probably NOT one of those cases. I would be highly skeptical that Microsoft has found a way around the performance problems that the Mach people missed when it comes to a "multi-principal browser" operating system. In fact, it is more likely that this is yet another case of Microsoft leveraging monopoly power in the OS market to answer the renewed threat on the browser front and "cut off the oxygen supply" of Mozilla, Opera, and other competing browsers.
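        The message-passing tax is easy to feel even without a microkernel: a round trip between two processes over an OS pipe costs orders of magnitude more than a function call. A rough Python illustration (the `echo` worker is a made-up example, and the measured latency is indicative only):

        ```python
        import multiprocessing as mp
        import time

        def echo(conn):
            """Worker process: bounce every message back until told to stop."""
            while True:
                msg = conn.recv()
                if msg is None:
                    break
                conn.send(msg)

        if __name__ == "__main__":
            n = 10_000
            parent, child = mp.Pipe()
            p = mp.Process(target=echo, args=(child,))
            p.start()

            start = time.perf_counter()
            for i in range(n):
                parent.send(i)   # one full IPC round trip per iteration
                parent.recv()
            per_msg = (time.perf_counter() - start) / n

            parent.send(None)    # sentinel: tell the worker to exit
            p.join()
            print(f"round-trip per message: {per_msg * 1e6:.1f} us")
        ```

        A plain in-process call returning the same value would take nanoseconds; the pipe round trip is typically in the microsecond-to-millisecond range, which is the overhead a message-passing kernel pays on every interaction.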

        • Re: (Score:3, Interesting)

          by Anonymous Coward

          No, Mach had two problems.

          First and foremost, messages were not idempotent, and while the system allowed for reentrancy, it did not allow for at-most-once processing of multiple identical messages. Among other things this complicated locking and diminished locality of reference, which has grown important in the presence of hierarchical memories and non-uniform access times in multiprocessor systems and clusters.

          This problem is fundamental and architectural in Mach, but it is not to message-passing microk

      • Is initial latency on UNIX pipes poor?

        • Good question, I'm not sure. It could probably be answered definitively by the real-time kernel developers, but I don't know of anything published saying yea or nay.

    • Re:Does it really (Score:5, Informative)

      by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday February 22, 2009 @01:32PM (#26950105) Homepage

      i've done event-driven vehicle simulators; i've clean-room network-reverse-engineered MSRPC and NT domains protocols; i've ported freedce to win32; i've added glib bindings to webkit and on top of that, ported a port of GWT to python even _more_ into python by adding DOM manipulation to pywebkitgtk.

      in amongst all that mindless drivel of alphabet soup you should be getting a pretty clear picture that i'm not a stranger to complexity.

      i've learned that if someone says "surely it doesn't have to be as complicated as all that", it's time to run like stink as fast as possible, out of the conversation and the room, and never look back.

      browsers are effectively desktop technology within a desktop (and damn good at displaying widgets), except you're letting the web site dictate what "programs" are allowed to be "run" on your desktop^H^H^H^H^H^H^Hbrowser.

      browsers are no longer "just HTML displayers", they are actually executing applications - _real_ applications - that in many instances happen to be written in javascript. GWT [google.com], Pyjamas [pyjs.org] and RubyJS [rubyforge.org] should all hammer that point home.

      with that in mind, why is it so hard to then imagine that, given that the "browser" is doing everything that you can also do with desktop widget UI toolkits, why is it so hard to appreciate that you need the full range of OS technology to support that desktop^H^H^H^H^H^H^H^Hbrowser technology?

      • Re:Does it really (Score:5, Insightful)

        by obarthelemy ( 160321 ) on Sunday February 22, 2009 @01:50PM (#26950257)

        Basically, since the browser already runs on top of an OS, the surprising thing is that they want to reimplement another OS within the browser.

        I assume that OS could run a browser which could run an OS which could... Do we really want that? Why?

        • Re:Does it really (Score:5, Insightful)

          by pyrbrand ( 939860 ) on Sunday February 22, 2009 @02:45PM (#26950689)

          The main issue right now is that a given web page often displays information from separate sources. The classic example at this point is that if I want to display ads on my web page, I have to bring in content from another source, and I essentially have to trust that content not to do tricky things with JavaScript to muck with my page - you know, display obnoxious content, or worse: spoof UI, scrape user data, attack a browser vulnerability, all sorts of nastiness. Ads aren't the only example of this; the same is true of mashups ala housingmaps.com etc.

          Relying on the OS is essentially what this paper is proposing as far as I can tell. They suggest that each part of a page that is relying on a different source for its content be sandboxed in its own process. However, doing this requires changes to the browser since current browsers don't do this (although Chrome and IE8 do work to isolate each tab in its own process). There are other proposals out there in the wild such as Web Sandbox discussed recently: http://tech.slashdot.org/article.pl?sid=09%2F01%2F28%2F188254&from=rss [slashdot.org] , which takes a different approach (sanitizing javascript for badness and restricting its access to the main page).
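          The per-source sandboxing the paper proposes can be sketched in miniature. This is a toy model, not Gazelle itself: the `principal_key` and `render` names are invented here, though keying isolation on the (scheme, host, port) triple of the origin is how the paper defines a principal.

          ```python
          import multiprocessing as mp
          from urllib.parse import urlsplit

          def principal_key(url):
              """A Gazelle-style principal: the (scheme, host, port) of the content's origin."""
              parts = urlsplit(url)
              return (parts.scheme, parts.hostname, parts.port)

          def render(url, out):
              # In a real browser this process would parse and render untrusted
              # content with no direct access to other principals' state; here it
              # just reports back to the "browser kernel" (the parent process).
              out.put((principal_key(url), f"rendered {url}"))

          if __name__ == "__main__":
              urls = ["https://example.com/page", "https://ads.example.net/banner"]
              out = mp.Queue()
              procs = {}
              for u in urls:
                  key = principal_key(u)
                  if key not in procs:          # one renderer process per principal
                      procs[key] = mp.Process(target=render, args=(u, out))
                      procs[key].start()
              for _ in procs:
                  print(out.get())              # kernel side: collect rendered output
              for p in procs.values():
                  p.join()
          ```

          The page and the ad land in different OS processes because their origins differ, so a compromise of the ad's renderer cannot directly touch the page's memory; all sharing has to go through the parent.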

        • i always wanted to write my own desktop, like webos or the example/demo that comes with extjs, using browser-based technology. then i can throw away all the silly desktops i never liked anyway, and run all my applications from inside the web browser. and, because i know that the browser technology is actually an OS, i know it's secure and also will have process-separation so that one app crashing won't take out my entire quotes browser quotes. hooray!

      • Re:Does it really (Score:5, Informative)

        by Vellmont ( 569020 ) on Sunday February 22, 2009 @01:52PM (#26950283) Homepage


        i've learned that if someone says "surely it doesn't have to be as complicated as all that", it's time to run like stink as fast as possible, out of the conversation and the room, and never look back.

        So you've never encountered a situation where someone added complexity because they couldn't see a simpler way to do something? I sure have. Dismissing the idea that something is too complicated and could be made far simpler out of hand simply seems wrong to me.

        why is it so hard to then imagine that, given that the "browser" is doing everything that you can also do with desktop widget UI toolkits, why is it so hard to appreciate that you need the full range of OS technology to support that desktop

        I could see a case for it. I could also see a case for doing it WITHOUT modifying the full range of OS technology. Why is it so hard to see that a secure browser could be done using existing operating systems?

        • Re:Does it really (Score:4, Interesting)

          by UnderCoverPenguin ( 1001627 ) on Sunday February 22, 2009 @02:11PM (#26950437)

          Why is it so hard to see that a secure browser could be done using existing operating systems?

          My guess would be that it is more palatable to call something completely new more secure than anything we currently have than it would be to concede that a competitor is more secure (even if you are not MS).

        • by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday February 22, 2009 @02:49PM (#26950719) Homepage


          why is it so hard to then imagine that, given that the "browser" is doing everything that you can also do with desktop widget UI toolkits, why is it so hard to appreciate that you need the full range of OS technology to support that desktop

          I could see a case for it. I could also see a case for doing it WITHOUT modifying the full range of OS technology. Why is it so hard to see that a secure browser could be done using existing operating systems?

          sorry, i assumed it would be clear. applications running within the browser are becoming more like _real_ applications - _real_ "desktop" applications, especially with downloadable-executable-code "plugins" (such as adobe's) having been thrown into the mix.

          and you have multiples of these "applications" running simultaneously.

          therefore, you have security implications, application stability implications, and much more [i recently had firefox crash out-of-memory on linux, and i have 2gb of ram and 3gb of swap space].

          therefore, you need to start looking at isolating the applications from each other, whilst also allowing them access across a common API to a central set of protected resources (screen, keyboard, mouse, other devices, memory, networking), to be able to communicate across that boundary without impacting any other applications or the central resource management layer itself.

          and i think you'll find that if you look closely, that's pretty much the definition of an OS.

          so, working from the requirements - the expectation that good, hostile, rogue or simply badly designed applications all need to be given a chance to run, you arrive naturally at the rather unfortunately-logical conclusion that the only decent way to fulfil the requirements is with an actual full-blown operating system.

          to believe that anything else can fulfil the requirements, to provide multi-tasked application stability and security, really is sheer delusion, or is... like... expecting a 1980s apple mac OS with a 68000 CPU and no Virtual Memory support, to be "secure". ... actually, there _is_ one other possibility: Security-Enhanced Linux (specifically, the FLASK security model behind SE/Linux). and we know what people think of _that_, despite SE/Linux being incredibly good at its job.

          • by Orne ( 144925 )

            Do you see the Web-Browser-As-OS implemented as a virtual machine capable of running inside another operating system?

      • ***why is it so hard to appreciate that you need the full range of OS technology to support that desktop^H^H^H^H^H^H^H^Hbrowser technology?*** And the result is not going to be a security nightmare? I'm wrong sometimes, and I haven't really understood an OS since about 1966. But complicated almost certainly means lots of exploits and defects. I'm betting that handing over complete control of PC resources to a sociopathic teenager in Minsk will not end well in many cases.
      • i've done event-driven vehicle simulators; i've clean-room network-reverse-engineered MSRPC and NT domains protocols; i've ported freedce to win32; i've added glib bindings to webkit and on top of that, ported a port of GWT to python even _more_ into python by adding DOM manipulation to pywebkitgtk.

        in amongst all that mindless drivel of alphabet soup you should be getting a pretty clear picture that i'm not a stranger to complexity.

        i've learned that if someone says "surely it doesn't have to be as complicated as all that", it's time to run like stink as fast as possible, out of the conversation and the room, and never look back.

        "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction." -- Albert Einstein

      • with that in mind, why is it so hard to then imagine that, given that the "browser" is doing everything that you can also do with desktop widget UI toolkits, why is it so hard to appreciate that you need the full range of OS technology to support that desktop^H^H^H^H^H^H^H^Hbrowser technology?

        You do need the full range of OS technology, you just don't need to re-implement it. You don't need to reimplement it because it is the purpose of operating systems to provide this functionality to application program

      • I'm going to guess that you were never asked to document your work.
    • Re: (Score:2, Insightful)

      by Nakoruru ( 199332 )

      I have two answers.

      The snarky answer is that when one writes a paper one has to make simple things sound as complicated as possible in order to make the paper look like you've discovered something interesting.

      More likely it really does have to be this complicated considering that handling security when combining content from multiple sources cannot be made simple unless you make it trivial (no trust or complete trust).

    • ... that guy [failblog.org]?

  • by NotQuiteReal ( 608241 ) on Sunday February 22, 2009 @01:20PM (#26950001) Journal
    I was told my browser can't be trusted to read PDF fils.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      Your spell checker is broken as well.

  • Dear MS, (Score:5, Insightful)

    by BitZtream ( 692029 ) on Sunday February 22, 2009 @01:31PM (#26950097)

    If you can't secure your basic OS, why exactly do you expect me to believe, or in fact even read a paper you wrote about a domain in which you absolutely suck?

    • MS Research are not the ones behind the production operating systems. That's like refusing to program in C because your phone line's unreliable.
    • Re:Dear MS, (Score:4, Informative)

      by Anonymous Coward on Sunday February 22, 2009 @01:53PM (#26950293)

      This is a paper co-authored by security researchers from MS *Research*, UIUC, and UWash. It is *not* a white paper let alone some kind of release announcement from MS. Security for web browsers in light of Web 2.0 technology is a major research topic, and I've seen a number of papers which propose similar ideas. What happens at MS Research (which has some darn good scientists) does not have to and often doesn't make it into a MS product. For example there is a lot of impressive research on privacy done by Cynthia Dwork at MS Research: haven't seen it or heard of it being implemented or even considered for implementation.

      So, chill out - this is a research paper, not news about MS's new browser.

    • Do you always refuse to believe that something could be right and true because of your own bias against the person or persons who are communicating it? Do you always succumb to the fallacy of ad hominem? It seems to me that anyone willingly blinding themselves to new information due to its source is condemning themselves to denying truth for the rest of their lives. Grow up.
      • by blueZ3 ( 744446 )

        The counter point to "...blinding themselves to new information due to its source..." is: "Those who refuse to learn from history (or in this case, past experience) are doomed to repeat it (or get screwed again)."

        Or more succinctly: fool me once, shame on MicroSoft. Fool me twice, shame on me.

  • by zappepcs ( 820751 ) on Sunday February 22, 2009 @01:34PM (#26950125) Journal

    Grammar problems aside, TFA blurb is difficult to read and talks about MS offering a web browser that is an OS Kernel.... that is secure... and backward compatible!

    I can only conclude that this website has been hacked, and this is a huge joke. Seriously, this sounds like MS PR machine trying to pour salt directly in the wounds of the boardmembers, or this was written by a person suffering delirium after being hit in the head by a flying chair. Well, perhaps it's just MS Marketing department trying reverse psychology?

    In any case, it's rather surreal to read those words.

    I'm off to check that there are no foreign substances in my coffee.

    • Microsoft has to have something to sell, and as they have in the past, selling you *another* OS is not out of the question.

      And even if they are not new-product ready and profitable, I think it would be even more financially urgent to attempt adding complexity to the current technology mix to hold them over until they do. New browser, methods, new development envs., IDE's, New Serverxxx w/extensions, SPs, patches, everything that keeps their juggernaut running.

      • I agree that this has been their past mode of operations, but in view of the rising popularity of F/OSS I don't think it is going to get them anything but a splendidly memorable bad day on the stock exchange. How many bad products do they have to try to launch before investors begin asking "WTF were you thinking?"

        Now don't confuse this with MS bashing. It's not. I'm not talking about how much better other things are compared to MS, this is only about MS. I genuinely don't see how they are going to pull this

  • Virtual Machine (Score:3, Interesting)

    by nurb432 ( 527695 ) on Sunday February 22, 2009 @01:51PM (#26950273) Homepage Journal

    Stick a full VM into the browser. Problem solved. Except of course for the huge resources needed to view even the simplest of pages.

    The entire push over the last few years to transferring processing load back onto the client is the wrong direction in my opinion, and the browser should remain a THIN client like the original intent. Keeping it a thin client by nature would be secure.

    • The entire push over the last few years to transferring processing load back onto the client is the wrong direction in my opinion, ...

      I agree.

      While I see the motivation for doing so, I see far more websites needlessly using JS, Java, or Flash, thus requiring scripting to be enabled for no good reason.

    • Easier solution: Put the browser in a VM sandbox that drops all changes to the filesystem once you're done. That's actually something an OS should support: Executing a non-trusted image in a VM. Somehow I think that should not be too hard with KVM but I haven't read enough about it.
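      A toy sketch of that discard-on-exit idea in plain Python (no KVM; the directory layout and file names are made up for illustration): the untrusted session only ever touches a throwaway copy of its state, which is roughly what running a qemu guest with -snapshot does for a whole disk image.

```python
import shutil
import tempfile
from pathlib import Path

# The "real" filesystem state the sandbox must protect.
base = Path(tempfile.mkdtemp())
(base / "profile.cfg").write_text("clean")

# Snapshot: the untrusted session gets a disposable copy, never the original.
scratch = Path(tempfile.mkdtemp()) / "snap"
shutil.copytree(base, scratch)

# The session scribbles all over its copy...
(scratch / "profile.cfg").write_text("malware")
(scratch / "dropper.exe").write_text("oops")

# ...and when the session ends, the copy is simply thrown away.
shutil.rmtree(scratch.parent)

print((base / "profile.cfg").read_text())  # -> clean
```

      The point is that "all changes" includes files the session created, not just files it modified: after the teardown, dropper.exe never existed as far as the real filesystem is concerned.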

    • Stick a full VM into the browser. Problem solved. Except of course for the huge resources needed to view even the simplest of pages.

      The entire push over the last few years to transferring processing load back onto the client is the wrong direction in my opinion, and the browser should remain a THIN client like the original intent. Keeping it a thin client by nature would be secure.

      noooo, nonono can do - yes it would be secure, but times have changed _drastically_. what's happened is that as the desktop wars got ridiculous (and i don't just mean between different OSes, i also mean between win95, xp and up), people simply moved to the browser itself to provide access to applications. all the talk of "ubiquitous computing" has actually _happened_.

      and, as the expectations of web infrastructure got ever greater, that original "thin client" architecture began to look... well... thin! so

    • This is my thinking as well. The original web browser model with its clear decoupling of responsibility between server and client is what makes the web incredibly attractive as an application platform.

      Contrast this with all the clunky alternatives that tried to build elaborate communication and presentation layers in which this decoupling was not clean and not portable. In retrospect, did we really need to implement distributed objects in the vast majority of cases? Apparently not, because suddenly ev
  • by RichMan ( 8097 ) on Sunday February 22, 2009 @01:53PM (#26950301)

    Thought #1:
    Microsoft forced the registry, DLL hell, and ActiveX on the world, even though they started with the really nice VMS security model as the basis for NT.

    Thought #2:
    Java is an application language with structured layered protections. And Java is pretty much now an open standard and embedded in modern browsers.

    Summary:
    Sure, the idea is right. Why don't we all just work on making Java better?

    Caution:
    From Microsoft this message sounds like a joke. They fought against Java and invented all that other crap that led to the creation of the virus protection industry. If they had done it right 10 years ago we would not be here now.

    • Re: (Score:2, Insightful)

      by magamiako1 ( 1026318 )
      #1. Registry is fine. What about "library hell" and "dependency hell" that other operating systems have? or "conf hell"? There are many "hells" we can talk about that exist in all systems. It's the complex nature of how the applications work.

      #2. Java is not embedded in modern browsers. You need to download an extra java client to run java applications. If you're talking about javascript, that is a different story.

      #3. Viruses predate Microsoft's modern operating systems. First virus/worm: The Creeper virus w
      • Re: (Score:3, Insightful)

        #1. Registry is fine. What about "library hell" and "dependency hell" that other operating systems have? or "conf hell"? There are many "hells" we can talk about that exist in all systems. It's the complex nature of how the applications work.

        The registry is a horrible idea: make one mistake in the registry and your computer might not boot. At least with plain config files you can screw up a lot and still be able to boot, at least into recovery mode.

      • I'm a Linux user, got introduced to it in around '96 and started using it a fair amount in '99. Never experienced library hell or dependency hell.

        When I started, your distro would give you a bare system, and everything else was a download, "gzip -cd source.tar.gz | tar -xf -", "./configure" and "make install" away.

        If you were missing a dependency, or had a version that was too old for the software you wanted to install, configure would stop and tell you which library was missing. At which point you simply d

        • I'm a Linux user, got introduced to it in around '96 and started using it a fair amount in '99. Never experienced library hell or dependency hell.
          [...]
          As I understand it, Windows/.NET are the only platforms to speak of which suffer from these problems.

          And over the same period, what DLL Hell have you encountered or heard of? Sure, back in the 16bit days, DLLs were loaded into the same memory address space to save memory. So even if they were stored in different folders, two different versions of a DLL could not be loaded at once. 32bit and 64bit DLLs do not suffer from this problem.

          While I haven't personally seen the problem come up in any OS from this decade, it hasn't completely eliminated the potential to go wrong. So over the years features were added

      • Re: (Score:3, Informative)

        by RichMan ( 8097 )

        > #1. Registry is fine

        Nope. Bill Gates says it is crap.

        http://blog.seattlepi.nwsource.com/microsoft/archives/141821.asp

        "Someone decided to trash the one part of Windows that was usable? The file system is no longer usable. The registry is not usable. This program listing was one sane place but now it is all crapped up."

  • the short version .. (Score:3, Informative)

    by viralMeme ( 1461143 ) on Sunday February 22, 2009 @01:59PM (#26950349)
    "Browser Kernel runs in a separate OS process, directly interacts with the underlying OS, and exposes a set of system calls for browser principals. We draw the isolation boundary across the existing browser principal defined by the same-origin policy (SOP) [34], namely, the triple of <protocol, domain-name, port>, using sandboxed OS processes"

    Run the browser kernel in a separate process with a restricted set of system calls, sandboxed from the rest of the system. In other words, don't do what we did with Internet Explorer and embed it into the core OS kernel.
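    For reference, the SOP principal the quote draws the isolation boundary on is just the (protocol, domain-name, port) triple; a minimal sketch (function names are mine, not Gazelle's):

```python
from urllib.parse import urlsplit

def principal(url):
    """Reduce a URL to its SOP principal: (protocol, domain-name, port)."""
    u = urlsplit(url)
    # Fill in the scheme's default port when none is given explicitly.
    port = u.port or {"http": 80, "https": 443}.get(u.scheme)
    return (u.scheme, u.hostname, port)

def same_origin(a, b):
    """Two pages may share state only if their principals match exactly."""
    return principal(a) == principal(b)

assert same_origin("http://a.com/page1", "http://a.com:80/page2")
assert not same_origin("http://a.com/", "https://a.com/")   # protocol differs
assert not same_origin("http://a.com/", "http://b.a.com/")  # domain differs
```

    Gazelle's claim is that each such triple gets its own sandboxed OS process, with the Browser Kernel mediating everything that crosses the boundary.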
    • Re: (Score:3, Insightful)

      by magamiako1 ( 1026318 )
      My question to you is what parts of Internet Explorer were "embedded into the kernel", and more importantly, what exploits and viruses/worms have access to the "kernel" of the operating system through IE.

      I'm no Windows kernel expert, but if you are I'd love to learn some more.

      Most of the problems I've seen with IE have more to do with users installing ActiveX applications rather than flat browser exploits. While browser exploits do exist and are important to guard against, a vast majority of problems that e
      • Best if you ask Microsoft about that. Officers of the company testified in court that the browser was so intimately linked to the system that it could not be removed.
  • by viralMeme ( 1461143 ) on Sunday February 22, 2009 @02:25PM (#26950535)
    "Process models 1 and 2 of Google Chrome are insecure since they don't provide memory or other resource protection across multiple principals in a monolithic process or browser instance. Model 4 doesn't provide failure containment across site instances [32].

    Google Chrome's process-per-site-instance model is the closest to Gazelle's two processes-per-principal-instance model, but with several crucial differences: 1) Chrome's principal is site (see above) while Gazelle's principal is the same as the SOP principal"

    " Chrome's decision is to allow a site to set document:domain to a postfix domain (ad.socialnet.com set to socialnet.com). We argue in Section 3 that this practice has significant security risks. 2) A parent page's principal and its embedded principals co-exist in the same process in Google Chrome, whereas Gazelle places them into separate processes"

    " Tahoma doesn't provide protection to existing browser principals. In contrast, Gazelle's Browser Kernel protects browser principals first hand "

    Classic bait and switch: compare Chrome running on Windows to Gazelle running on some imaginary secure other OS. MS memo: Google's Chrome is eating our lunch, quick, rush out a 'research paper' trashing it and pretend Chrome is playing catch-up with Gazelle. If Chrome were so bad, why expend the time criticizing it?
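    For the curious, the document.domain relaxation the quoted passage warns about is easy to sketch: once a parent page and an embedded ad both set document.domain to the shared suffix, the browser treats them as same-origin (rule simplified here; real browsers also consult the public-suffix list):

```python
def can_relax_to(host, target):
    """Sketch of the quoted Chrome rule: a page may set document.domain
    to itself or to any dot-separated suffix of its own host."""
    return host == target or host.endswith("." + target)

# Both the first-party page and the third-party ad frame can relax
# to the shared suffix...
assert can_relax_to("payments.socialnet.com", "socialnet.com")
assert can_relax_to("ad.socialnet.com", "socialnet.com")
# ...after which they share an origin: the ad can now script the
# payments page -- the risk the paper flags.

# An unrelated domain still cannot claim the suffix.
assert not can_relax_to("evil-socialnet.com", "socialnet.com")
```

    Gazelle's answer, per the quote, is to key isolation on the full SOP principal and put embedded principals in separate processes, so the relaxation trick has nothing to relax.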
  • The Virtual Machine!! What's the patent number on this one?

  • This, boys and girls, is what happens when one starts with a shitty OS and tries to make up for it in the browser (a la IE) or in the virtual machine (a la JVM).

    An OS with a solid security model doesn't require all of these kludges. The sad reality is that the three dominant OSes in use treated security as an afterthought, and yes, that includes UNIX.

    I'm going to sound like an old fogie, but back in my day any one could bring down an entire Unix system by simply typing the right stty combination, or one could
