New Firefox Project Could Mean Multi-Processor Support

suraj.sun writes with this excerpt from Mozilla Links: "Mozilla has started a new project to split Firefox into several processes at a time: one running the main user interface (chrome), and one or several others running the web content in each tab. Like Chrome or Internet Explorer 8, which have implemented this behavior to some degree, the main benefits would be increased stability (a single tab crash would not take down the whole session with it) and better performance on the multiprocessor systems that are progressively becoming the norm. The project, which lacks a catchy name like other Mozilla projects (TaskFox, Ubiquity, or Chocolate Factory), is coordinated by long-time Mozillian Benjamin Smedberg, with Joe Drew, Jason Duell, Ben Turner, and Boris Zbarsky also on the core team. According to the loose roadmap published, a simple implementation that works with a single tab (no session support, no secure connections, on either Linux or Windows, probably not even based on Firefox) should be reached around mid-July."
  • Why isn't everyone doing this?

    As chipmakers demo 64 and 128 core [gizmodo.com] chips, why aren't we coding and being trained in Erlang [wikipedia.org]? Why aren't schools teaching this as a mandatory class? Why aren't old applications being broken down and analyzed to multithread components that don't interact? Why isn't compiler theory concentrating on how to automate this (if possible)?

    It's becoming obvious that the number of cores is going to far outweigh the number of applications we'll be running five years from now (so you can't leave it up to the OS), so why isn't this a bigger focus in application development right now?

    I understand a lot of server side stuff can take advantage of this (in the nature of serving many clients at once) but it's only a matter of time before it's typical on the desktop.
    • by stillnotelf ( 1476907 ) on Thursday May 07, 2009 @05:16PM (#27867227)
      Splitting your application into threads means you have to get them to communicate with each other. When's the last time you met a programmer who loved communicating? There's nobody else in Mom's basement to practice on!
      • Re: (Score:3, Interesting)

        by sopssa ( 1498795 )

        I wish Opera would catch up to this as well. It's a great browser, but when it does crash on some page, the whole browser goes down. They'll have to soon, seeing that all the other major browsers have implemented it.

    • Two easy answers:

      On the server side you just fire up a few more processes.
      On the client side you rarely need the juice that multiple cores provide. Processor speed still keeps improving _per core_. In most cases it is simply not worth the effort yet.
      • by timeOday ( 582209 ) on Thursday May 07, 2009 @05:36PM (#27867577)

        No, your post and the one you replied to are off base, because Firefox is already multithreaded:

        # ps -eLF | grep firefox
        user 23146 20837 23146 0 1 468 496 1 15:26 ? 00:00:00 /bin/sh -c firefox
        user 23147 23146 23147 4 6 43763 59000 0 15:26 ? 00:00:12 /usr/lib/firefox-3.0.7/firefox
        user 23147 23146 23149 0 6 43763 59000 0 15:26 ? 00:00:00 /usr/lib/firefox-3.0.7/firefox
        user 23147 23146 23150 0 6 43763 59000 0 15:26 ? 00:00:00 /usr/lib/firefox-3.0.7/firefox
        user 23147 23146 23154 0 6 43763 59000 1 15:26 ? 00:00:00 /usr/lib/firefox-3.0.7/firefox
        user 23147 23146 23155 0 6 43763 59000 1 15:26 ? 00:00:00 /usr/lib/firefox-3.0.7/firefox
        user 23147 23146 23156 0 6 43763 59000 0 15:26 ? 00:00:02 /usr/lib/firefox-3.0.7/firefox

        And when I tried it just now, opening a new tab spawned a new thread (maybe more than one).

        The question for this article is: why separate processes instead of threads? If you have processes sharing memory (especially read/write memory), the distinction between threading and multiple processes becomes rather small.

        I do hope they can make Firefox survive a plugin crash, because YouTube always locks up Firefox eventually.
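
        For the thread-per-tab flavor, the skeleton really is just one pthread_create() per tab. Here's a minimal sketch in POSIX C (render_tab() and the tab count are made-up stand-ins for illustration, not anything from the Firefox source):

        /* Thread-per-tab sketch: each "tab" gets its own worker thread.
         * render_tab() is a hypothetical stand-in for real page work.
         * Build with: cc -pthread demo.c */
        #include <pthread.h>
        #include <stdio.h>

        #define NTABS 4

        static void *render_tab(void *arg)
        {
            int tab = *(int *)arg;
            /* real code would parse, lay out, and paint here */
            printf("tab %d rendered by thread %lu\n",
                   tab, (unsigned long)pthread_self());
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[NTABS];
            int ids[NTABS];

            for (int i = 0; i < NTABS; i++) {
                ids[i] = i;
                pthread_create(&tid[i], NULL, render_tab, &ids[i]);
            }
            for (int i = 0; i < NTABS; i++)
                pthread_join(tid[i], NULL);   /* a crash in any one thread
                                                 still kills all of them */
            return 0;
        }

        The comment on that last join is the whole debate: the threads share one address space, so none of them is isolated from the others.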

        • by idontgno ( 624372 ) on Thursday May 07, 2009 @06:00PM (#27867989) Journal

          Well, at least in the Unix/Linux model, processes are mostly independent, memory-wise. Shared memory is an explicit thing, under the category of Interprocess Communication (IPC). Under no condition does a fandango-on-core in one user process trash non-shared core in another process, and shared memory is generally restricted to shared-context communications, so it's both a smaller victim space and functionally more resilient. (Code using IPC shmem expects it to be volatile, and well-written code that uses IPC shmem vets its contents carefully before using it, so catastrophic oopses should be rare.)

          Compare that to the more modern thread model, which, in almost every architecture I'm aware of, mostly runs in exactly the same user space. If a thread eats atomic hot buffalo wings, all its brother threads in the same process get the same heartburn. The upside, barring badness, is that thread management is lightweight: no need to copy the parent memory image to a separate allocation and set up full process "OS bureaucracy" data structures. In contrast, it's practically "wave your magic wand et voila you have created a new thread". Very responsive. Very fragile.
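
          To make the "explicit thing" concrete, here is a rough POSIX sketch (error checks omitted, segment name made up): nothing is shared until both processes opt into the same named segment.

          /* Explicit POSIX shared memory: parent and child share only this
           * one named segment; every other page stays private.
           * Build with: cc demo.c -lrt (on older glibc). */
          #include <sys/mman.h>
          #include <sys/wait.h>
          #include <fcntl.h>
          #include <unistd.h>
          #include <stdio.h>

          int main(void)
          {
              int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
              ftruncate(fd, sizeof(int));
              int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
              int private_copy = 1;     /* NOT shared: fork() duplicates it */

              *shared = 1;
              if (fork() == 0) {        /* child */
                  *shared = 42;         /* the parent will see this */
                  private_copy = 42;    /* the parent will never see this */
                  _exit(0);
              }
              wait(NULL);
              printf("shared=%d private=%d\n", *shared, private_copy); /* 42 1 */
              shm_unlink("/demo_shm");
              return 0;
          }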

          I think this responsiveness is a lot of the reason to love threads. And that "crashing" stuff? That never happens to me. So I don't need to worry about how fragile threads are.

          ObDisclaimer: it's been a few years since I've done any hardcore coding, so I may have missed some important details. If I did, I'm sure someone vastly smarter than me will be happy to point it out.

          • by setagllib ( 753300 ) on Thursday May 07, 2009 @06:10PM (#27868171)

            There's a much more important reason to use threads instead of processes + IPC, and that's that inter-thread communication is a sub-microsecond matter. Even the context switch between multiple threads (in the same process) is so cheap you can have way too many threads and still not see the overhead if you're also doing real work. In Linux much of inter-thread communication happens entirely in userland, so you don't even suffer the cost of a system call. You can go even further and use atomic operations to make data structures and algorithms that never need system calls to begin with, and that's about as fast as you can get with threading.
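
            A toy version of that last point, written with C11 atomics (which postdate this era's compilers, though GCC's __sync builtins did the same job): the handoff below uses no mutex and no syscall, just an acquire/release pair. The spin-wait is only sane for very short waits; real code would fall back to a futex or condvar.

            /* Lock-free inter-thread handoff with C11 atomics.
             * Build with: cc -std=c11 -pthread demo.c */
            #include <stdatomic.h>
            #include <pthread.h>
            #include <stdio.h>

            static _Atomic int payload = 0;
            static _Atomic int ready   = 0;

            static void *producer(void *arg)
            {
                (void)arg;
                atomic_store_explicit(&payload, 123, memory_order_relaxed);
                atomic_store_explicit(&ready, 1, memory_order_release);
                return NULL;
            }

            int main(void)
            {
                pthread_t t;
                pthread_create(&t, NULL, producer, NULL);

                /* spin until the release store becomes visible */
                while (!atomic_load_explicit(&ready, memory_order_acquire))
                    ;
                printf("got %d with no lock and no syscall\n",
                       atomic_load_explicit(&payload, memory_order_relaxed));
                pthread_join(t, NULL);
                return 0;
            }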

            • by Chirs ( 87576 ) on Thursday May 07, 2009 @06:50PM (#27868989)

              I think you are a bit confused.

              With Linux, the only difference between context-switching between threads and between processes is the update of the page tables and the flushing of the TLB. Not normally a big deal.

              Also, I'm not sure where you get the idea that interthread communication happens in userland--threads share memory, file descriptors, signal handlers, etc., but things like sockets/pipes need to go through the kernel. Processes can be made to share memory too, it's just a bit more work to set up, and you need to be explicit as to exactly what is being shared. (Which can be an advantage.)

              Perhaps you're thinking about synchronization primitives which do not require a syscall in the uncontended case--if so, those are valid to use between processes as well.

              Multithreaded apps have the potential to be faster than multi-process ones due to the lack of TLB flush, but they're more fragile due to the shared memory. For something like a browser which is often prone to crashing on crappy plugins, it makes sense to aim for reliability.
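
              To illustrate that point about primitives working between processes (a sketch, error checks omitted): a pthread mutex marked PTHREAD_PROCESS_SHARED and placed in a shared mapping synchronizes two processes, and like any futex-backed lock it only enters the kernel when contended.

              /* Process-shared mutex in an anonymous shared mapping,
               * inherited across fork(). Build with: cc -pthread demo.c */
              #include <pthread.h>
              #include <sys/mman.h>
              #include <sys/wait.h>
              #include <unistd.h>
              #include <stdio.h>

              int main(void)
              {
                  struct shared { pthread_mutex_t m; int counter; } *shm =
                      mmap(NULL, sizeof(*shm), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);

                  pthread_mutexattr_t attr;
                  pthread_mutexattr_init(&attr);
                  pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
                  pthread_mutex_init(&shm->m, &attr);
                  shm->counter = 0;

                  if (fork() == 0) {            /* child increments under the lock */
                      pthread_mutex_lock(&shm->m);
                      shm->counter++;
                      pthread_mutex_unlock(&shm->m);
                      _exit(0);
                  }
                  pthread_mutex_lock(&shm->m);  /* parent does the same */
                  shm->counter++;
                  pthread_mutex_unlock(&shm->m);

                  wait(NULL);
                  printf("counter=%d\n", shm->counter);   /* always 2 */
                  return 0;
              }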

          • It's a very strange trend to me.

            Tab processes must have some way to access global data and state. A shared memory approach is quite likely. So now, instead of a tab crash directly bringing down others, you just hope that nothing scary happens to the shared memory area. You also hope that your "crash" isn't some other failure like a deadlock - suddenly everything else hangs trying to get the mutex for the global bits? What if a plugin gets exploited in just one tab? Then the exploit code can use its unsandboxed…

          • The Unix fork model is nowhere near as expensive as you think: it predominantly just creates the per-process OS data structures, and it relies on shared code pages already loaded in memory rather than creating new copies.
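
            A toy demonstration of that copy-on-write behavior (the array size is arbitrary): fork() doesn't copy the 16 MB up front, and a write in the child copies only the page it touches.

            /* fork() is cheap because pages are copy-on-write: the child
             * starts with the parent's memory, and a write copies only the
             * touched page; the parent never sees the change. */
            #include <sys/wait.h>
            #include <unistd.h>
            #include <string.h>
            #include <stdio.h>

            static char big[16 * 1024 * 1024];  /* 16 MB, shared until written */

            int main(void)
            {
                memset(big, 'x', sizeof(big));

                pid_t pid = fork();             /* no 16 MB copy happens here */
                if (pid == 0) {
                    big[0] = 'y';               /* copies just this one page */
                    _exit(0);
                }
                waitpid(pid, NULL, 0);
                printf("parent still sees '%c'\n", big[0]);   /* prints 'x' */
                return 0;
            }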
    • by Chabo ( 880571 ) on Thursday May 07, 2009 @05:17PM (#27867267) Homepage Journal

      As a recent college grad, I had one course in which threads came into play; it was the course that introduced GUI work, where we used threads so our GUI wouldn't freeze while a worker thread was running. That is the area where single-threading is most apparent to the user, after all.

      There isn't all that much room for undergrads to take courses on threading though; the course I took it in was the highest-level course that's required of all CS majors, and even still, that was only one semester after taking our "Intro to C and Assembly" course.

      Realistically, an in-depth course on good threading implementation is at the graduate level, but there isn't a large percentage of CS majors that go on to graduate work.

    • by I'mTheEvilTwin ( 1544645 ) on Thursday May 07, 2009 @05:18PM (#27867279)

      It's becoming obvious the number of cores is going to far outweigh the number of applications we'll be running five years from now

      The number of cores (on at least some chips) already outweighs the number of applications you can run if you run Windows 7.

    • by MrMr ( 219533 )
      As another developer I have to ask: where did you learn to program? Because this has been a standard part of every curriculum I know of since the early 1980s.
      • Re: (Score:3, Interesting)

        by Tony Hoyle ( 11698 ) *

        Really? Not that I noticed. I was taught Pascal, Ada, 68000 machine code, and they let us play with a little C off the record. Oh, and Cobol, of course. No threading at all. That was around 1990.

        Having talked to programmers who qualified more recently, it hasn't gotten any better, except they now get to learn C 'officially'. It takes around 6-9 months for a new programmer to pick up how things are done in the real world after coming through the education system.

        • Re: (Score:3, Interesting)

          by MrMr ( 219533 )
          Yes, really. I did a minor in CS in 1988, and we had to write our own semaphore-based threading code on a bunch of 3B2s connected by 10Base2.
          I'm pretty sure that was plain textbook stuff (from the first chapter of Tanenbaum's Operating Systems), as you would fail the grade if you didn't get it to work.
    • by k.a.f. ( 168896 ) on Thursday May 07, 2009 @05:26PM (#27867451)

      Why isn't everyone doing this?

      Because multi-threaded programming is really really hard to get right, and because most programs either are not CPU bound, or else have so much inherently non-parallel logic that the benefit would be marginal. Serving multiple independent tabs in a web browser is extremely amenable to parallelization, but almost everything else isn't.

      • It is not hard to get right when you leave side effects out of the language. Because of the determinism and independence from other parts of the program, you can easily split up the processing, and even cache results wherever it helps. Automatically. (Of course you can still control it manually.)
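
        A low-level illustration of why (plain C with pthreads here, though the point above is that a side-effect-free language does this for you): when the worker is a pure function and each thread writes disjoint outputs, there are no locks and the result is deterministic.

        /* Parallel map over a pure function: no shared mutable state,
         * no locks, same answer every run. Build with: cc -pthread demo.c */
        #include <pthread.h>
        #include <stdio.h>

        #define N 8

        static int in[N] = {1, 2, 3, 4, 5, 6, 7, 8};
        static int out[N];

        struct range { int lo, hi; };

        static int square(int x) { return x * x; }  /* pure: no side effects */

        static void *map_range(void *arg)
        {
            struct range *r = arg;
            for (int i = r->lo; i < r->hi; i++)
                out[i] = square(in[i]);   /* each thread owns disjoint slots */
            return NULL;
        }

        int main(void)
        {
            pthread_t a, b;
            struct range lo = {0, N / 2}, hi = {N / 2, N};

            pthread_create(&a, NULL, map_range, &lo);
            pthread_create(&b, NULL, map_range, &hi);
            pthread_join(a, NULL);
            pthread_join(b, NULL);

            for (int i = 0; i < N; i++)
                printf("%d ", out[i]);    /* 1 4 9 16 25 36 49 64, always */
            printf("\n");
            return 0;
        }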

      • Re: (Score:3, Insightful)

        by Sloppy ( 14984 )

        Because multi-threaded programming is really really hard to get right

        In "why isn't everybody doing this?" the "this" refers to not doing multi-threaded programming; it means forking a process and talking over a pipe (or some other much-high-level-than-shared-memory IPC), which is actually pretty easy to do and hard to fuck up.

        The catch is, it tends to be harder to think up ways to split your program into multiple processes that can really be useful with such relatively limited IPC (relatively limited compared to shared memory, that is)…
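
        That fork-and-pipe pattern in miniature (error handling omitted; the "rendered page" payload is just a stand-in): if the child crashes, the parent sees EOF on the pipe instead of going down with it.

        /* fork a worker and talk to it over a pipe */
        #include <sys/wait.h>
        #include <unistd.h>
        #include <string.h>
        #include <stdio.h>

        int main(void)
        {
            int fds[2];
            pipe(fds);                      /* fds[0] = read, fds[1] = write */

            if (fork() == 0) {              /* child: the worker process */
                close(fds[0]);
                const char *result = "rendered page\n";
                write(fds[1], result, strlen(result));
                _exit(0);
            }

            close(fds[1]);                  /* parent keeps the read end */
            char buf[64];
            ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child says: %s", buf);
            } else {
                printf("child died; the session survives\n");  /* EOF */
            }
            wait(NULL);
            return 0;
        }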

    • Re: (Score:2, Informative)

      by Anonymous Coward

      As chipmakers demo 64 or 128 core chips, why aren't we coding and being trained in Erlang?

      Every mainstream programming language has facilities for multithreaded programming, and there's no need to learn a new one just to do it.

      Why aren't schools teaching this as a mandatory class?

      Multiprocessing is a key theme of operating systems courses, which are in the core curriculum of all CS programs. Many other courses also cover synchronization primitives, IPC, and other topics useful for multithreaded programming.

      Why aren't old applications being broken down and analyzed to multithread components that don't interact?

      It's usually difficult to retroactively add such features to applications that weren't originally designed with them in mind.

      Why isn't the compiler theory concentrating on how to automate this (if possible)?

      Compilers do parallelize when possible. It's usually not possible without intervention by the programmer.

      • Multiprocessing is a key theme of operating systems courses, which are in the core curriculum of all CS programs. Many other courses also cover synchronization primitives, IPC, and other topics useful for multithreaded programming.

        If only it were... life would be so much simpler.

        In truth, 'operating systems' often means little more than learning machine code. Schools teach the bare minimum to pass - it's not worth their while to start on non-core subjects. The average graduate I see can't even *spell* IPC, let alone…

      • Why isn't the compiler theory concentrating on how to automate this (if possible)?

        Compilers do parallelize when possible. It's usually not possible without intervention by the programmer.

        I think he's wondering why more effort isn't being spent on getting the compiler to do it more intelligently. I don't know whether or not that is happening; I'm not in the industry.

    • Because it's hard?

    • by Anonymous Coward on Thursday May 07, 2009 @05:31PM (#27867515)

      Erlang is a very poor choice for true multi-threaded programming. It does "lightweight" threads very nicely but real multi-CPU stuff is very slow. To the point that it negates using multiple processors in the first place.

      While I like programming in Erlang, its performance sucks donkey balls. Even the HiPE stuff is pretty damn slow.

      Plus the learning curve for functional languages is pretty high. Most programmers take a good bit of training to "get it", if they ever do. I have been programming in Erlang for about 5 years, and even though I get it, I still prefer "normal" programming languages like C/C++, Lua, Perl, whatever. I use functional tricks, and I wish some of those imperative languages had more functional features, but I think imperative languages work more like the human mind does, and that helps me program better.

      We do need something to make multiple-CPU programming easier though. Threaded programming in C/C++ or similar can turn into a nightmare real quick, it's error prone and complicated.

      • Wouldn't the point of the course be to teach them about threading (perhaps using Erlang)? They could then use those skills to thread in other languages.

    • by nxtw ( 866177 )

      Why aren't schools teaching this as a mandatory class?

      Because it is a niche language.

      Why aren't old applications being broken down and analyzed to multithread components that don't interact?

      Many programs already use multiple threads and were threaded before multi-core systems were common. The modern Windows shell (explorer.exe) has been multithreaded since it was introduced in Windows 95.

      In general, anything obvious and easy to parallelize probably has been parallelized already.

      Why isn't the compiler theory concentrating on how to automate this (if possible)?…

    • by CajunArson ( 465943 ) on Thursday May 07, 2009 @06:01PM (#27868007) Journal

      Erlang's great until the share-nothing approach leads to so much overhead in pushing bytes back and forth between processes that you are spending more time copying bytes than actually doing work. Not saying that normal thread models are always better, but there is no "perfect" multiprocessing model, and Erlang has its own pitfalls. As for Firefox, you are basically running a series of stovepipes where it makes sense for each tab to have a separate process... why it has taken so freakin' long for this I don't know, but it's not a new idea (hell, I posted it right here on Slashdot back when FF3 was just coming out... lemme check... here [slashdot.org]).

    • Why do you think this isn't the case? There's plenty of research in exactly the areas you describe going on right now. It's all over the programming blogs and research papers; everywhere I look (when searching for programming-related topics) there is a tutorial on functional programming. Are you living under a rock or something?

    • As this article has pointed out, we don't need to switch to Erlang to take advantage of multicore chips. All we need to do is make better use of multithreaded programming, which has been around for ages. One of the tags on this article said "abouttime": this is exactly right. Why wouldn't a tabbed browser put each tab into a separate thread? I already have 25 open tabs in my current Firefox session, and it seems a little silly that those aren't in different threads or processes in this day and age.

      Right

    • by johanwanderer ( 1078391 ) on Thursday May 07, 2009 @06:35PM (#27868677)
      That's because most GUI applications are driven by events, and most applications are written to have just one event handler/dispatcher.

      That doesn't mean that the application doesn't have a ton of threads or processes utilizing processor resources. It's just easier and more efficient for a single dispatcher to communicate with a bunch of threads than with a bunch of processes. It also means that when one thread catches a hiccup, the whole application has to deal with the collateral damage.

      Now, to make a GUI application multi-process, you need a dedicated process to handle drawing and events. Add one or more processes to handle the tasks, and IPC to tie them together. In other words, you end up reimplementing X :)

      Add a deadline to that and you can see why you end up with just multi-threaded applications.
    • Re: (Score:3, Insightful)

      by johannesg ( 664142 )

      For the most part we are not doing it because it is a totally useless activity. The vast majority of programs out there get along fine with a single thread (or just a few threads for specific purposes). Adding more threads will not make them faster or better in any appreciable way.

      And thread creation / communication / synchronization also has an overhead, and that overhead might very well add up to slower overall programs. Besides, if you are working and your computer just seems to stop for a second... That…

  • responsiveness (Score:5, Insightful)

    by Lord Ender ( 156273 ) on Thursday May 07, 2009 @05:11PM (#27867129) Homepage

    I think the main benefit of such a system would be responsiveness. It is very unpleasant when one tab temporarily causes the entire browser window to become completely unresponsive--including the STOP button or the button to CLOSE the misbehaving tab. The UI should never freeze for any reason.

    • by anss123 ( 985305 )

      The UI should never freeze for any reason.

      Sadly, IE8 still has this problem. Anyone know for Chrome?

      • The UI should never freeze for any reason.

        Sadly, IE8 still has this problem. Anyone know for Chrome?

        Chrome is designed so that no blocking operations whatsoever are allowed on the UI thread. In theory, therefore, the interface should never freeze up. Since Linux builds still tend to crash a lot, though, I haven't been able to give it a good workout personally.
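
        Reduced to a toy sketch (my own illustration, not Chrome code): the blocking work lives on a worker thread, and the "UI" thread only ever polls with a short timeout, so it keeps servicing events while the slow fetch runs.

        /* "Never block the UI thread", in miniature.
         * Build with: cc -pthread demo.c */
        #include <pthread.h>
        #include <poll.h>
        #include <unistd.h>
        #include <stdio.h>

        static int notify[2];               /* worker -> UI wakeup pipe */

        static void *slow_fetch(void *arg)
        {
            (void)arg;
            sleep(2);                       /* stand-in for a slow page load */
            write(notify[1], "done", 4);
            return NULL;
        }

        int main(void)
        {
            pthread_t worker;
            pipe(notify);
            pthread_create(&worker, NULL, slow_fetch, NULL);

            struct pollfd pfd = { .fd = notify[0], .events = POLLIN };
            while (poll(&pfd, 1, 250) == 0) /* 250 ms "frame tick" */
                printf("UI still responsive...\n");

            printf("fetch finished, update the tab\n");
            pthread_join(worker, NULL);
            return 0;
        }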

    • The UI should never freeze for any reason.

      Whoah. Somebody hasn't used a Windows OS in a while, I see...
      • The existence of MS Windows does not detract anything from his point. It only serves to demonstrate that Microsoft has not learned the lesson.

    • Re: (Score:3, Funny)

      by camperdave ( 969942 )
      It is very unpleasant when one tab temporarily causes the entire browser window to become completely unrespon...

      sive--including the STOP button or the button to CLO...

      SE the mi...

      sbehaving tab. The UI should never freeze for any reason.


      Hear! Hear! I don't recall any problems of this nature back on version 2.x of Firefox. And why does my bank's website think I'm running from a different computer every time there's a minor update to Firefox?
    • Re:responsiveness (Score:5, Insightful)

      by timeOday ( 582209 ) on Thursday May 07, 2009 @05:41PM (#27867655)
      Multi-processing aside, I wish Firefox had an option to NOT use any CPU (including scripts, plugins, etc.) on tabs except the one visible. I do NOT want 30 different processes, all Firefox tabs, using up all my cores just to run spam animations. Granted, I DO usually want tabs to at least download in the background, so maybe it's harder than it sounds.
      • Re:responsiveness (Score:4, Insightful)

        by Darkness404 ( 1287218 ) on Thursday May 07, 2009 @06:17PM (#27868303)
        Sure, but there are a lot of problems with that for the general public. For example, a lot of people (including me) fire up YouTube, Pandora, or other web-based music services in one tab, then listen to the music while browsing in other tabs. I also usually open up Facebook in another tab, and like the fact that if I get a message it alerts me with a sound so I can go back to the tab.

        Sure, it would be useful as an option, but I think this is more add-on territory because of how little it would benefit most people.
      • Re: (Score:3, Interesting)

        by coldmist ( 154493 )

        If I look at a page like The Drudge Report, I can ctrl-click on 10 links, creating 10 background tabs. Then, I click on the first article tab, read a bit, close the tab. That shows me the 2nd article. Close it, and I get the 3rd. etc.

        This way, I don't have to wait more than 50ms to go from article to article. They are already loaded in the background for me.

        Very handy!

        Doesn't everyone do this?

    • This can be achieved with threads; I really hate the idea of jumping to a one-process-per-tab model when it doesn't offer the advantages being promised. If this is going to be done, it needs to be done for the security benefits, and that requires OS/distro cooperation!

      Responsiveness, multicore use, and tab-crash protection can all be done using threading.
      Security is the only reason to use separate processes, and IMO I don't want to take a per-tab performance hit when browsing slashdot/youtube/gay^H^H^Hporn/etc; per-tab st…

  • Finally! (Score:5, Interesting)

    by nausea_malvarma ( 1544887 ) on Thursday May 07, 2009 @05:11PM (#27867137)
    About time, Mozilla. I've used Firefox since it came out, and lately I've noticed it's not the hot-rod it once was. The web is changing - full of in-browser videos, web apps, and other resource-intensive content - and Firefox has had trouble keeping up. I look forward to better speed and stability, assuming this project is seen through to completion.

    Otherwise, I'd probably switch to Google Chrome eventually, even though it doesn't have the add-on support I enjoy in Firefox.

    • The web is changing ...

      This is about hardware changing, not the web. If the CPU manufacturers were still concentrating on X Ghz chips instead of Y core chips, Mozilla wouldn't be doing this. Intel and AMD have spoken and the software world better pay attention.

      Mozilla is interested in providing a better user experience and they're correct in taking full advantage of your hardware. As multicore chips become cheaper and cheaper to fabricate and they show up in netbooks with low frequencies, this is going to pay off big time.

      • This is about hardware changing, not the web. If the CPU manufacturers were still concentrating on X Ghz chips instead of Y core chips, Mozilla wouldn't be doing this. Intel and AMD have spoken and the software world better pay attention.

        Not quite correct.

        The laws of physics have spoken, and caused Intel and AMD to change their focus from clockspeed/GHz to multicore. The software world better either pay attention, or figure out a way on their own to get single-core CPUs to run at 20 GHz without needing liquid cooling…

    • Re: (Score:2, Flamebait)

      The web is changing - full of in-browser videos, web apps, and other resource intensive content, and firefox has had trouble catching up.

      Of course, with add-ons to Firefox like Adblock Plus, FlashBlock and NoScript, all that crap becomes Opt-In. Aside from occasional problems with the Java plugin (which I need for a specific site), I've never felt that Firefox was slowing me down. Chrome felt slower despite handling JavaScript faster, because it had to run the JavaScript, period.

    • and lately I've noticed it's not the hot-rod it once was.

      And it doesn't crash like it used to back in the early 1.x days. Sure, it's a little bloated, but I'll happily compare my FF uptime stats with at least Windows Server.
    • Try out Minefield; it's pretty fast and rarely crashes on me (literally twice in ~6 months of running it). Rendering is fast, startup time is pretty good, and generally Firefox 3.5 is the fastest browser I've seen (granted, I'm on Linux, but it compares favorably to Chrome on my friend's Windows box, and the Linux version doesn't even have PGO yet).

      Changing to a multi-process model is going to mean a performance hit and only provides a marginal security benefit. Firefox's main security hazard is its extensions, of course…

  • Relief (Score:3, Insightful)

    by elashish14 ( 1302231 ) <profcalc4 AT gmail DOT com> on Thursday May 07, 2009 @05:13PM (#27867159)
    This is great. I'm sick of that stupid integrated PDF viewer made by Adobe that always crashes my whole browser. Now it'll just crash a tiny bit.
    • So why are you still using it then? Why can't you use KPDF or the document viewer that is built into GNOME?
    • You can set Firefox to open PDFs externally instead of using the plugin. Options -> Applications.

      That's what I did until last week, when I switched to Sumatra.

    • Re: (Score:3, Informative)

      by Iguanadon ( 1173453 )

      Tools->Options->Applications. Search for 'pdf' and change it from using the plugin to automatically downloading and opening the file.

      I also recommend using Foxit Reader instead of Acrobat for viewing PDFs. It too has an in-browser plugin, but downloading and opening the application is quicker, at least for me; the actual application usually opens in less than a second.

      But back on topic, I have been using Chrome more and more lately due to the fact that no tab can crash the entire browser. I still use Firefox, though…

  • by MoOsEb0y ( 2177 ) on Thursday May 07, 2009 @05:15PM (#27867213)
    Does the process separation prevent badly-behaved plugins needed for a good portion of websites in existence these days *cough*flash*cough*acrobat*cough* from killing your browser when they inevitably decide to break? Both plugins have been killing me on both win32 and Linux. NoScript and mozplugger or Foxit help to some degree, but Firefox is by far the most unstable program I use these days because of plugins.
    • It depends how it's done, but the performance hit in the past has been pretty bad. This is how nspluginwrapper works, and apparently running 32-bit Flash on a 32-bit system still took a noticeable performance hit. IMO it's better to just do this with threads but keep a close eye on the plugin threads.

  • How about threads? (Score:2, Interesting)

    by node159 ( 636992 )

    Processes vs Threads...

    I'm pretty certain that the usual 40-60 pages I have open are going to blow the memory if each runs in its own process.

    • Re: (Score:2, Informative)

      by mishehu ( 712452 )
      And I thought that Firefox was already multithreaded, and thus already supported multiple processors... this would just be a different approach to the same scenario: how to split up the tasks over multiple CPUs...

      or am I wrong about it being multithreaded?
    • by TheRaven64 ( 641858 ) on Thursday May 07, 2009 @05:36PM (#27867571) Journal
      No. Just no.

      On any modern system, there is very little memory overhead to having multiple copies of the same process. They will share read-only or copy-on-write versions of the executable code and resources loaded from shared libraries and the program binary, as well as any resource files opened with mmap() or the Windows equivalent. The only real overhead is relocation symbols, which are a tiny fraction of most processes. In exchange for this small overhead, you have the huge benefit of having completely isolated instances which only communicate with each other through well-defined interfaces.

      Threads are an implementation trick. They should not be exposed as a programmer abstraction unless you want people to write terrible code. Go and learn Erlang for how parallel code should be written.

    • Re: (Score:3, Informative)

      by nxtw ( 866177 )

      Do mods know nothing about modern operating systems?

      A properly implemented application using a multi-process model should use only slightly more memory, thanks to shared memory [wikipedia.org], a feature of any modern operating system.

    • Re: (Score:3, Informative)

      by dltaylor ( 7510 )

      If it is threads, then the common parts are sharing literally the same memory, although you do pay for some locks.

      If processes, which would be more robust, then the common parts should be in a .so/.dll to share the code (common data could be in library-allocated memory, but cleanup is tricky on M$-Windows), and per-instance data is part of the process, which, when a window (tab, too, I suppose, but I don't use them) is closed, would free the memory. Reducing the amount of common storage to simplify its management…

    • by RiotingPacifist ( 1228016 ) on Thursday May 07, 2009 @05:47PM (#27867765)

      I tried explaining this on Digg, but to have even the Slashdot headline not understand it is depressing!
      I think there is an advantage to processes per tab against a code injection attack.
      Also, if you had Firefox-gui, Firefox-net, Firefox-Gecko, Firefox-Profile, and Firefox-file, you could give each one a different SELinux/AppArmor/UAC profile.

      I'm not sure what the performance trade-off would be like, so I sincerely hope there is a single-binary compile option. I also think that a good way to balance the performance hit of per-tab processes is to only put HTTPS tabs in separate processes (additionally, it would be smart to prevent extensions from running on those pages; GUI extensions would still work, but nothing that touched the page).

      Are processes even needed for security, though? Can threads be locked down to achieve this without the performance hit? (And additionally, can extensions be locked down?)
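
      On Linux there is at least one mechanism pointing that way: seccomp (in the kernel since 2.6.12) is applied per thread, not per process. Here is a sketch of its strict mode in a forked child; this is a stand-in for a real SELinux/AppArmor policy, not an equivalent, since strict mode leaves the child able to do nothing but read/write fds it already holds and exit.

      /* Lockdown sketch with seccomp strict mode (Linux only; the kernel
       * needs CONFIG_SECCOMP). After the prctl() call, the child may only
       * use read(), write(), _exit() and sigreturn(); anything else kills it. */
      #include <sys/prctl.h>
      #include <sys/wait.h>
      #include <sys/syscall.h>
      #include <linux/seccomp.h>
      #include <unistd.h>
      #include <string.h>
      #include <stdio.h>

      int main(void)
      {
          int fds[2];
          pipe(fds);

          if (fork() == 0) {                   /* untrusted "content" process */
              close(fds[0]);
              prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);
              const char *msg = "hello from the sandbox\n";
              write(fds[1], msg, strlen(msg)); /* still allowed */
              /* open(), socket(), execve(), ... would now be fatal */
              syscall(SYS_exit, 0);            /* raw exit(2): glibc's _exit()
                                                  calls exit_group(), which
                                                  strict mode forbids */
          }
          close(fds[1]);
          char buf[64];
          ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
          if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
          wait(NULL);
          return 0;
      }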

    • by jd ( 1658 )

      You think that's bad? A friend of mine routinely runs 400+ tabs. (She is forced to use Firefox 1.x, as nothing newer is capable of handling that kind of number.) Can you imagine the resource hit she'd take if each of those were a separate process?

  • by MoFoQ ( 584566 ) on Thursday May 07, 2009 @05:17PM (#27867251)

    I guess it can be useful in determining which site I visit tends to create the memory leaks I still experience (even with FF3).
    (As I type, this current browser session has ballooned to over 600MB... which is still better than my typical with FF2, which was 700-800MB.)

    Maybe they can dedicate a process just for "garbage collection".

  • Catchy Name (Score:5, Funny)

    by tmmagee ( 1475877 ) on Thursday May 07, 2009 @05:17PM (#27867271)
    How about FireFork?
  • by faragon ( 789704 ) on Thursday May 07, 2009 @05:23PM (#27867393) Homepage
    ... is to surrender, in order to accept buggy-as-hell plug-ins or memory leaks as "acceptable".

    The current multithreaded Firefox is able to use multiple CPUs; the rationale for splitting the tabs into independent processes is surrender to mediocrity. How about increasing QA, doing proper synchronization between components, and not allowing untested components to be used without showing a big warning at installation?
  • by Tumbleweed ( 3706 ) * on Thursday May 07, 2009 @05:26PM (#27867437)

    Will Chrome mature to have a nice system of plugins to match the advantages of Firefox before Firefox rearchitects this very low-level code?

    I sometimes wonder about the FF devs - I've been wondering about the lack of a multi-threaded (at least) UI for a few years now. That project kept getting put off and put off until there was too much code to change easily. Only now that a real competitor comes along do they bother with the obvious thing that should've been put in from the start. Do FF devs not actually USE FF? Or do they not browse sites with Flash apps that go out of control and make the browser completely unresponsive? I find that hard to believe.

    Whatever. At least it'll finally happen. One wonders how many people will have switched over to Chrome by the time they get this out the door, though.

  • by darpo ( 5213 ) on Thursday May 07, 2009 @05:27PM (#27867461) Homepage
    They both have geek-cred, but Chrome people say Firefox is unstable, while Firefox people complain Chrome has no extensions. So it's a race between the two browsers: will Firefox get tab isolation before Chrome, or will Chrome get extension support before Firefox? Either way, we users win.
  • by Sowelu ( 713889 ) on Thursday May 07, 2009 @05:28PM (#27867477)
    The advantage of single-processor apps in a less-than-perfect OS is that when an app decides to chomp up all the CPU it can grab, it doesn't cripple your machine. Moving from one to two cores has meant, for me, that browsers can't suck down 100% of my CPU and prevent me from even closing them for minutes at a time. This had better not let Firefox use up 100% of my machine again.
  • For the last few years Google's strategy has been to make the browser the platform of choice. That would make the whole Windows, Linux, Mac, mobile whatever choice irrelevant.

    Making Firefox act more like a real operating system, where each "application" runs in its own process, is another step in that direction. It means that my Gmail browser window won't crash if I surf to some buggy website. And it means that I can run a lot of browser-based applications faster and more stably.

    This is the next logical step…
    • Also, separate processes provide more isolation, so the malware site I'm visiting has no avenue to get at the banking application in the next tab.

      However, web-based apps require all browsers to render basic HTML properly. When is Firefox going to fix Bug 33654? It was reported just over 9 years ago. This is the bug that prevents you from using TEXTAREA HTML elements in forms and getting them to be a consistent size. So far it's been reported and marked as a duplicate of this bug at least 25 times.
