Microsoft Research Showcases New Browser Prototype, "Gazelle"

Ars Technica reports that Microsoft has opened up about "Gazelle," a new browser prototype of theirs that is modeled after the underlying concepts of operating system design. "A research team led by Microsoft's Helen Wang recently published a report about an experimental browser prototype called 'Gazelle' that uses processes to isolate page content elements originating from different domains. It builds on the concept of multiprocess browsing but uses more fine-grained isolation to expand on the security advantages that are already delivered by existing multiprocess browsing models. But is it an operating system, Microsoft Research's analogue to Google's Chrome OS? Not quite."
Comments Filter:
  • by Anonymous Coward on Friday July 10, 2009 @01:51PM (#28653299)

    Oh boy! A more secure browser from the pros at secure browsing, Microsoft.

  • New Focus (Score:5, Interesting)

    by Haffner ( 1349071 ) on Friday July 10, 2009 @01:51PM (#28653301)
    Microsoft focusing on developing a browser-based OS is directly opposed to their current business model, which involves forcing users to purchase an operating system. Microsoft's focus has always been on for-pay, offline applications. Taking a precautionary foray into Google's future business model seems to show that they are at the very least wary of Google's future plans.
    • by maxume ( 22995 )

      You are wildly mischaracterizing what Microsoft Research does. This isn't a new direction or focus for the company, it is a paper about some ideas.

    • by tonyr60 ( 32153 )

      Microsoft focusing on developing a browser-based OS is directly opposed to their current business model, which involves forcing users to purchase an operating system.

      But not opposed to their business model of disrupting competition by any means possible. A significant competitor comes up with a browser/OS combination, so let's tell the market that we are doing just that. Should keep the builders of those pesky netbook thingies from drifting away again.

  • Not an improvement (Score:5, Interesting)

    by Todd Knarr ( 15451 ) on Friday July 10, 2009 @01:55PM (#28653349) Homepage

    MS's idea is nice, but it's not going to help a lot of things very much. It'll help when plug-ins and helper apps run amok: being in a separate process, they won't be able to block the browser itself. But from a security standpoint the problem isn't that those embedded objects are in the same process, it's that they have access to the same page and the DOM elements in it and the data structures of the browser itself. And that won't be solved just by putting them in their own process, not without isolating them from the rest of the page and browser to a degree that would break a lot of Microsoft's technologies.

    • by Kupfernigk ( 1190345 ) on Friday July 10, 2009 @02:04PM (#28653493)
      Some of Microsoft's technologies - like data files that can execute code - need to be broken. It is sometimes necessary to sacrifice convenience for a degree of security. The personal computer industry has been slowly coming to terms with this for the last 10 years or so; it would be nice if we didn't have to wait another 10 before it all works properly.
      • If you're going to wait for an end to Office automation you'll be waiting for a long time.

        Anyone else see the irony of this comment on a page full of AJAX tech by the way?
      • Some of Microsoft's technologies... need to be broken.

        Hey - you're in luck!

    • it's that they have access to the same page and the DOM elements in it and the data structures of the browser itself.

      Well, if that isn't convincing enough, threads usually inherit the permissions of the process creating them. So multi-threading only manages to increase complexity without improving security. Any exploit made in the child process/thread can use those permissions to access the parent. It's not even a privilege escalation attack, because no permissions are changed to do it. Sad. Very sad.
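A minimal sketch of the distinction being drawn in this thread, using Python's standard threading and multiprocessing modules as stand-ins (nothing Gazelle-specific; browser_state is a made-up placeholder): a thread runs inside its creator's address space with its creator's privileges, while a separate process only ever works on its own copy of memory.

```python
import threading
import multiprocessing

# Made-up stand-in for browser state a compromised component might want to touch.
browser_state = {"cookies": "session=abc123"}

def tamper():
    browser_state["cookies"] = "stolen"

if __name__ == "__main__":
    # A thread shares the parent's address space: the mutation is visible here.
    t = threading.Thread(target=tamper)
    t.start(); t.join()
    print(browser_state["cookies"])    # -> stolen

    browser_state["cookies"] = "session=abc123"

    # A separate process works on its own copy; the parent's state is untouched.
    p = multiprocessing.Process(target=tamper)
    p.start(); p.join()
    print(browser_state["cookies"])    # -> session=abc123
```

Of course, as the comments above argue, that isolation buys little if the isolated pieces are then handed access to the same page and browser data through some other channel.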

  • by Jugalator ( 259273 ) on Friday July 10, 2009 @01:55PM (#28653353) Journal

    After reading that article, I'm much less excited than I was. I had assumed it was something similar to Google Chrome OS, but it's not even something that seems to have turned out very well for Microsoft, and it may yet have undiscovered major issues on the horizon. The idea seems to have grown overly complex working around the limitations of the approach, and all that in a resource-hungry .NET application. It says they're hopeful of getting the per-tab RAM usage down from 16 MB, but I have to wonder by how much. This approach doesn't seem much better than running a process-separated browser written in Java. Ugh.

    I have a hard time understanding the decision to use .NET, but perhaps it was a security decision? Anyway, it doesn't sound like the optimal choice when the whole project revolves around low-level features like isolating the tabs even further.

    • by LO0G ( 606364 )

      I actually think it's closer to the OP browser [uiuc.edu] than it is to Chrome OS.

    • by PsychicX ( 866028 ) on Friday July 10, 2009 @02:18PM (#28653717)
      But it's not supposed to be a product! This is MS Research. It's an architectural experiment, and sure, a lot of projects graduate from Research to an actual product group. The goal is NOT to make something you can take to the open market, though. It's a proper research lab, and so of course its stuff is frequently lacking polish. If it were to be converted to a product, it'd be staffed up with a full team who would spend a year or two -- or seven in the case of some unfortunate victims -- making it viable for public consumption.
      • But sometimes, MS Research's projects do become actual products. Here's a famous example: link [youtube.com]

        With a track record including successful products like this, I'm sure this MS Research project will be a giant and resounding success.

    • by Colonel Korn ( 1258968 ) on Friday July 10, 2009 @02:49PM (#28654067)

      After reading that article, I'm much less excited than I was. I had assumed it was something similar to Google Chrome OS, but it's not even something that seems to have turned out very well for Microsoft, and it may yet have undiscovered major issues on the horizon. The idea seems to have grown overly complex working around the limitations of the approach, and all that in a resource-hungry .NET application. It says they're hopeful of getting the per-tab RAM usage down from 16 MB, but I have to wonder by how much. This approach doesn't seem much better than running a process-separated browser written in Java. Ugh.

      I have a hard time understanding the decision to use .NET, but perhaps it was a security decision? Anyway, it doesn't sound like the optimal choice when the whole project revolves around low-level features like isolating the tabs even further.

      Read the article a bit more and you'll discover that the purpose of this project was to find the limitations of taking the separate process model to an extreme, with every element on a single page living in its own process. This was low level research, not an attempt to spark a new product.

    • How would you think it was related to Chrome OS?

      I came here expecting to ask why they added this line: "But is it an operating system, Microsoft Research's analogue to Google's Chrome OS? Not quite." But you had already made this comment. Why would a browser prototype be like an operating system?

      And... using .NET more has always been their plan, to convince people you can write useful apps in it. I thought Vista was supposed to have large portions written in .NET - right along with the database-like filesystem

      • Exactly. Sadly some people now think an OS that can do nothing but run a browser is the holy grail of operating systems. (Chrome OS will probably do a bit more than run a browser, admittedly. And then some day a person will write a website that emulates a desktop (probably already done). Performance might be better than anything else, but the feature set will be low. From that webpage users will eventually be able to launch other webpages! And then someone may even figure out a way to launch some C++

  • I am far from an expert on browsers so I am genuinely curious to know if this will keep the browser from occasionally "stalling". Will it allow the browser to keep downloading/processing the parts of the web page that it can even when certain elements are unavailable? If so, sounds like a good advance!

  • Standards (Score:2, Insightful)

    by doishmere ( 1587181 )
    IE doesn't support web standards, but people still use it because it has Microsoft's name. Maybe this will get people to switch to a (hopefully) standards-compliant browser.
    • People still use IE because its icon is almost always present on the desktop, and it has "Internet" written under the icon. Such an icon draws inexperienced users' mouse pointers like a light bulb draws moths, especially compared to a FF icon. "FireFox? What does that have to do with the internet?"
      At my workplace I even removed all the IE icons I could find, only to get complaints that "I can't find the internet any more!"
      • Try making a link to FireFox using the IE icon. Some of the users I've done this for never even noticed the difference.
    • by leuk_he ( 194174 )

      This was certainly true for Explorer 5, 6 and somewhat Explorer 7. But with Explorer 8 they were brave enough to break non-compliant sites.

  • by girlintraining ( 1395911 ) on Friday July 10, 2009 @01:59PM (#28653429)

    It builds on the concept of multiprocess browsing but uses more fine-grained isolation to expand on the security advantages that are already delivered by existing multiprocess browsing models.

    That's a new definition of security of which I was previously unaware. Just about anyone who's spent five minutes trying to do multi-process, multi-threaded, unsynchronized accesses, cloud, spin-locks, etc., will tell you that no, there are no inherent security advantages. It'll be less secure unless you make a dedicated effort from project start just to keep it on par with single-threaded. The only "advantage" it has is that when it fails it'll crash more slowly, with a wider variety of obscure error messages, hammering the operating system with the extra overhead as it tanks. Yes, it might be slightly harder to develop an exploit because it's not exploiting a generic flaw but some complicated and obscure one -- but that's not more secure, only badly designed.

    • by Tanman ( 90298 )

      It's more secure because if one browser instance gets compromised, you don't automatically compromise all of the data currently in other browser instances.

      At least that's my understanding. I could just as easily be wrong.

      • Bullshit.

        It is not more secure 'just because'.

        It CAN be more secure, but as history shows, pretty clearly, the more code you add, the less likely it is to be secure.

        You don't get automatic security when you have multiple processes unless they are completely and in every way unable to do anything to the other. Since multiprocess browsing results in using shared memory, IPC, and even sharing files on disk, there is pretty much no reason whatsoever to assume that multiprocess is more secure.

        Once you break o

    • by Cyberax ( 705495 )

      Yes. There ARE inherent advantages.

      You can make RPC protocols with time-outs, so a supervisor process can kill an offending child tab. Of course, you'll need to design your protocol to support timeouts, but I assume that Google and Microsoft developers are not stupid.

      Next, you can easily isolate a process. Operating systems are GOOD at that, especially Linux with SELinux, seccomp and other _kernel-level_ sandboxing solutions.

      Empirically, Google Chrome, which also uses a multi-process approach, works _faster_ than
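A minimal sketch of that supervisor-with-timeout idea using Python's multiprocessing module; this is only an illustration of the general pattern (the render_tab function and the five-second deadline are made up), not Gazelle's or Chrome's actual RPC layer.

```python
import multiprocessing
import time

def render_tab(url):
    # Made-up stand-in for a tab's rendering work; imagine it hangs on a bad page.
    while True:
        time.sleep(1)

if __name__ == "__main__":
    tab = multiprocessing.Process(target=render_tab, args=("http://example.com",))
    tab.start()

    tab.join(timeout=5)        # the supervisor waits at most five seconds
    if tab.is_alive():
        tab.terminate()        # the unresponsive tab is killed...
        tab.join()
        print("killed unresponsive tab")   # ...and the rest of the browser keeps running
```

The same pattern extends to a real RPC protocol: each request carries a deadline, and the supervisor treats a missed deadline the same way it treats a crashed child.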

    • Re: (Score:1, Informative)

      by Anonymous Coward

      It is true that our current abstractions for concurrency suck. However, the benefit of a multi-process (not multi-thread) abstraction is address space isolation. Namely, when your browser tab or plugin goes off scribbling in memory, every other browser tab keeps on chugging along without getting corrupted. As you pointed out, we still need good designs for exchanging data between processes. But now we can design much smaller interfaces around narrow, inter-process communication channels instead of the massi

      • by llmc ( 928250 )
        oops. posted anonymously
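A minimal sketch of that narrow-interface idea, again in Python and purely illustrative (the "plugin" and the URL are made up): the single pipe is the child's entire interface to the parent, so scribbling over its own memory, or crashing outright, cannot corrupt the parent's address space.

```python
import multiprocessing

def plugin(conn):
    # The pipe is the plugin's entire interface to the rest of the browser;
    # it never sees the parent's address space.
    request = conn.recv()
    conn.send("rendered " + request)
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = multiprocessing.Pipe()
    p = multiprocessing.Process(target=plugin, args=(child_end,))
    p.start()

    parent_end.send("http://example.com/banner.swf")
    print(parent_end.recv())    # -> rendered http://example.com/banner.swf
    p.join()
```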
    • By "increased security", they seem to mean that its less likely the whole browser with all your tabs and data is going to suddenly crash. Which can be useful, for example you might be in the middle of filling out an intense job application form on one tab and suddenly the youtube video you were loading in the background takes down your whole session.

      But I think calling it "security" just makes everyone assume they mean it will be harder for people to hack their system.

      Surely if it takes 16MB of RAM just to
  • Trident? (Score:3, Insightful)

    by Dotren ( 1449427 ) on Friday July 10, 2009 @02:05PM (#28653509)

    It's largely a .NET application that uses Internet Explorer's "Trident" rendering engine.

    Granted, it has made significant improvements but I still haven't been that impressed by the Trident engine. Sometimes I wish they'd use someone else's engine so that they'd be kept up-to-date on standards AND you'd have the same browsing experience on multiple browsers.

    I guess I can understand why they don't though... they'd be up a creek without a paddle if they used Webkit and people stopped developing for it or licensed Gecko from Firefox and they went under or yanked Microsoft's license.

    • I don't see how either of these are true.

      Licensing Gecko -- is it out of the question for them to just go open source? And certainly Webkit proves you're wrong, by its very existence -- Apple adopted KHTML and started developing for it. As long as Apple continues shipping Safari, I don't see development stopping, but even in some imaginary world where Apple stopped caring, Microsoft could simply adopt the code themselves.

      There is no technical or real business reason not to do it that way.

      The real reason is

    • they'd be up a creek without a paddle if they used Webkit and people stopped developing for it or licensed Gecko from Firefox and they went under or yanked Microsoft's license.

      Well, for one thing, if it adheres to standards, there shouldn't be that much concern about whether people "stop developing for" a particular rendering engine. Also, can Mozilla yank someone's license for Gecko? Isn't it open source? Why should people stop using Webkit?

      There are a few real reasons that Microsoft isn't using one of the existing open source rendering engines:

      1. it would be acknowledging the validity and value of open source software
      2. having a dominant but incompatible browser has
    • Re: (Score:3, Interesting)

      by BitZtream ( 692029 )

      Wow, WTF do you think open source is for? You've managed to imply that the two most important advantages to open source don't exist for two very large open source projects.

      Gecko is open source. They can't yank the license out from under you any more than they can from Webkit because the OSS license implies that you can continue to use it forever.

      Second, Webkit, like Gecko can stop development right this instant and they won't be any worse off than using Trident. They'll just have to do the webkit or gec

      • by Dotren ( 1449427 )

        Wow, WTF do you think open source is for?

        The two items you're thinking of in my original post were just some rather bad assumptions on my part due mostly to being in a hurry when I was thinking about this and looking the info up.

        When I looked up Gecko today I saw that it was developed by Mozilla and I assumed it was proprietary to their browser. I completely missed the following, which is a few paragraphs down:

        Its development is now overseen by the Mozilla Foundation. Licensed by a tri-license of the Mozilla Public License (MPL), GNU General Public License (GPL) and GNU Lesser General Public License (LGPL), Gecko is free and open source software.

        My bad assumption on Webkit was due to forgetting that there are already other companies contributing back to it and that, therefore, SOMEO

  • by MCSEBear ( 907831 ) on Friday July 10, 2009 @02:07PM (#28653545)
    Ummm... Isn't a Gazelle kind of a fast animal?

    Since this browser runs at half the speed of the not exactly quick IE 7, shouldn't it be given a code name more in keeping with its actual speed? I've always thought Ubuntu had a cute naming scheme going. I hereby dub this software Turgid Tortoise.
  • Not new (Score:4, Insightful)

    by BitZtream ( 692029 ) on Friday July 10, 2009 @02:08PM (#28653555)

    Unless by new you mean:
    From February at least, though it seems older to me: http://research.microsoft.com/apps/pubs/default.aspx?id=79655 [microsoft.com]

    Has already appeared on slashdot and a hundred other tech sites.

    http://tech.slashdot.org/article.pl?sid=09/02/22/1724244 [slashdot.org]

    It's hard to google before you run to try and get a story submitted, isn't it?

    • Oh, even better.

      This isn't an article from Microsoft. It's an article from another news aggregator, like Slashdot, about a paper published by Microsoft.

      There's something fundamentally wrong when your stories consist of links to other people talking about stories they didn't even write.

      Ars and Slashdot are gonna get together and just circlejerk each other into their own little world.

  • Sounds neat. I shall get it immediately.
    Well, right after I get Windows 7, which will be after I get Vista, which will be after I get XP, which will be after I decide Microsoft has done anything worthwhile since Windows 2000.

  • Moving away from the DOM is ultimately going to confound search engines and the namespaces they index. It would be nice to see Javascript running inside a VM - CLR or JVM - followed by other languages with the same access that Javascript has: make Javascript a language under .NET in IE, perhaps use the Java VM for Javascript in Firefox, etc. This may lend itself more readily to an indexable semantic web in the future.
    • Why?

      The DOM is a programming technique. DOM is independent of HTML and HTML doesn't require DOM; DOM is just how an app represents and interacts with HTML in its own memory.

      SAX is another common alternative.

      Then you can do what companies like Google, Yahoo! and MS do right now, which is completely custom and has very little in common with a DOM implementation, because a DOM implementation doesn't work at the scale they deal with.
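A minimal illustration of that DOM-versus-SAX distinction, using Python's standard-library XML parsers on a toy document (nothing browser-specific): DOM builds the whole tree in memory for random access, while SAX streams parse events to a handler and keeps nothing it isn't told to keep.

```python
import xml.dom.minidom
import xml.sax

doc = b"<ul><li>one</li><li>two</li></ul>"

# DOM: parse the whole document into an in-memory tree the caller can walk.
tree = xml.dom.minidom.parseString(doc)
print([li.firstChild.data for li in tree.getElementsByTagName("li")])

# SAX: the parser pushes events (start tag, text, end tag) to a handler.
class ItemCollector(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.items, self.in_li = [], False
    def startElement(self, name, attrs):
        self.in_li = (name == "li")
    def characters(self, text):
        if self.in_li:
            self.items.append(text)
    def endElement(self, name):
        self.in_li = False

handler = ItemCollector()
xml.sax.parseString(doc, handler)
print(handler.items)
```

A browser's DOM is the first model grown huge and scriptable, which is why the comment above worries about what search engines would index if pages stopped being describable as a single tree.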

      • I'm referring to programmatic interaction with the browser's DOM interface once the document is loaded.

        Why? Primarily because Javascript is an awful language. As the emergence of GWT and countless other web 2.0 Javascript APIs indicates, there is a lot of work being carried out in Javascript to bring more interactive UIs to web pages. There are a number of advantages that introducing other languages through a .NET or Java VM could bring:

        - Javascript is not a language that scales well to larger teams and pr
  • Well, we were so eager to get rid of segments that by the time the 80386 more or less perfected them, we dumped them for flat mode. Now they are gone in x86-64, likely never to return. What a terrible mistake! If we still had segments, we could have a lightweight browser process with user-space threads assigning segments to different domains on the page. Instead of trying to get protection by wrapping software sandboxes around everything like Java, C# or something else does, we could have the CPU actually doing it. If only I could go back in time, as I fumed over the likes of ES:CX, and say to myself: no, no, this will actually turn out to be a good thing in the future!

    • Re: (Score:2, Informative)

      by BitZtream ( 692029 )

      Uhm ... segmenting didn't sandbox shit. It just made it annoying to get in between, not impossible as shown by the many different libraries that help programmers do exactly that.

      The virtual memory manager support built into processors, on the other hand, DOES segment blocks of RAM. This is why kernel space can be protected from random attacks in user space.

      Perhaps an OS that takes more advantage of the VMU would accomplish what you want, but jumping back to segmented addressing just means that the hackers

      • by tjstork ( 137384 )

        Uhm ... segmenting didn't sandbox shit. It just made it annoying to get in between, not impossible as shown by the many different libraries that help programmers do exactly tha

        In 16 bit, it surely didn't, but you could set up a 386 to have a segmented memory model where each segment within a process got its own permissions. So basically, within a process, you could make sure that data from two different pointers, aka segments, would not overwrite each other.

        • Re: (Score:2, Informative)

          by ergo98 ( 9391 )

          I think you're confusing concepts. Segmented memory was a hack, and protected nothing. Then they added protected mode, giving OS' the option of acting as the cop of memory. That has been on the x86 since the 286, and is of course widely used.

          Everything that any process on your machine does in user-space has to be effectively "allowed" by the operating system. It is purely due to non-granular permission structures that modern OS' don't allow you to fine-tune every permission of even "native" executables.

          • by tjstork ( 137384 )

            confusing concepts..

            No, I'm just looking at it from an instruction-set point of view. In the 16-bit world, ES, DS and the other segment registers were just annoyances that you had to deal with in order to get at data over 64k in length. In C you could choose a couple of ways to handle this - there were the small and tiny memory models, which locked down the segments to the start of the program and data; the medium and large, which let you change the segments occasionally; and then the huge, which gave you an

            • I remember those days. I was tasked with updating our assembler libraries to take advantage of the 386. We had the FS and GS registers, and assembler commands such as LFS - Load Pointer Using FS, LGS - Load Pointer Using GS to simplify the loading of far pointers from the stack. It was very tedious at the time, and I think slower to access memory, taking half a dozen clock cycles. However, I think you have an interesting idea. Back in those days security didn't take on the same dimensions as it does today.
  • They might have better luck reducing the "resource overhead" if they programmed the kernel in something other than C#.
  • How long have we had browsers? How obvious is all of this? This reads like an article from 1997.

    We are still debating the best way to multi-thread a browser!

    One might have thought that we would be a little further along with this kind of stuff.

  • The UNIX designers were adamant about process-based isolation, to the degree that UNIX for a long time did not have kernel threads (early versions of Java had to emulate threads even in the mid-90's). Macintosh and Windows were much quicker to adopt threads.

    It's kind of ironic that Microsoft now is pushing for process-based isolation. Their window system, of course, has also moved to a client/server architecture and asynchronous calls now, like X11 already did 20 years ago. Why don't they just go all the

  • I guess that's an advanced concept for application programming in the world of Windows, where fork() is still an alien concept.

    Excuse me while I remain unimpressed. I'll wait a while longer while MS familiarises itself with common programming techniques developed in the 1970s.

  • A research team led by Microsoft's Helen Wang discovered how to rebrand and skin a theme for the widely used Firefox. "Since most people already think we stole Firefox code to render webpages correctly with IE 8, we got to thinking..." Helen Wang stated in a call yesterday. "Our next IE release will be simply to rebrand and change the default skin of the latest Firefox web browser! So we can then say, finally, that Internet Explorer correctly renders web pages according to W3C published standards." This

  • Re: (Score:1, Troll)

    Comment removed based on user account deletion
    • Even if they shipped a browser that was standards-compliant and passed Acid3 yesterday, it wouldn't matter, because we'll still have to deal with all of the corporations that refuse to upgrade past IE6 until 2075.
    • Comment removed based on user account deletion
  • Now that Google has come out with their free OS-with-browser, MS is striking back. They will retaliate with their own free OS and browser system. MS is so competitive they cannot lose; they will dominate the free market, if necessary by bribing their way in. You may have a vacuum on your head, but we have more vacuum in our heads.
  • This sounds like the nice folks at Microsoft finally read the Google Chrome specs and decided to claim that they invented it - or to be more charitable, they're doing the same thing.

    This is business as usual for the Microsoft people - but the real world has changed while they weren't looking. That "internet" thing that Bill mis-predicted has allowed news and information to be disseminated at lightning speed. Those who deal in lies and deception no longer have the cloak of secrecy to hide behind.

    If Micros

  • It has been a while since I took the operating systems course in my undergraduate CS curriculum, but I seem to recall that there is a balance to be struck between too many and too few processes with regard to other related attributes such as security and stability. In some cases, notably in Sun Solaris, an attempt was made to capture more of the benefits of process isolation without the attendant overhead of full processes by introducing so called "lightweight processes" which provide more isolation benefit
  • the underlying concepts of operating system design...

    What does that mean? Is the default home page a blue screen?

    I'm happy to slag Microsoft's sloppy, lazy, monopolistic crapware with the best of them, but Microsoft Research has produced some remarkable work. What is this crap? Research = repackaging with provocative, if meaningless, buzzwords? Was this from Microsoft market research?

  • What about Project Gazelle? Now that's something truly revolutionary. http://www.projectgazelle.org/ [projectgazelle.org]
  • if they wanted to be competitive they'd call it

    The Hound

  • Microsoft Research Showcases New Browser Prototype, "Gazelle"

    Like how they 'published a report', without actually producing anything like a prototype, in an Ars Technica article, the day after Google announced Chrome OS. If I didn't know any better I would suspect the whole exercise was designed to steal Google's thunder.
