Firefox Working to Fix Memory Leaks
Christopher Blanc writes "Many Mozilla community members, including both volunteers and Mozilla Corporation employees, have been
helping to reduce Firefox's memory usage and fix memory leak bugs lately. Hopefully, the result of this effort will be that Firefox 3 uses less memory than Firefox 2 did, especially after it has been used for several hours." Here's hoping. Frequent restarts of things on my computer make me furious. I can't imagine why anyone would tolerate such things.
but but (Score:5, Interesting)
Symmetry (Score:2, Interesting)
Re:And on three... (Score:5, Interesting)
Every time someone mentions memory issues, the responses are either that it's supposed to consume a gigabyte of RAM so that it speeds up the back button, or that "there is no memory issue".
Strange, then, that there are suddenly people specifically attacking memory use issues that supposedly don't exist.
the solution is simple (Score:1, Interesting)
Glad the issue is getting some priority, but .... (Score:3, Interesting)
I hope they won't totally forget the folks using older-specification systems, but I have my worries about the FF debugging process: I looked at the blog referenced in the article header, and some of the comments sound ominous for quality. The way some of them read, it seems as if patches that force memory release in various situations are just going to be grafted on top of the buggy code. That looks like a recipe for performance loss compared with fixing the problems at their root.
-wb-
Re:but but (Score:5, Interesting)
I've heard that too. I use FF on my desktop at work with one or two plugins (FlashBlock and FireBug, mostly). It does leak memory after enough time. Closing the browser always fixes it, so it's not much of a problem.
That said, if a plugin leaks memory, there are a few options. First, the system should know. Even if the plugin is used constantly, I should be able to open the extensions options panel and see how much memory each one is using, so I can identify the culprit. There should be a warning system ("Plug-in 'MemHog2' is using 500 MB of RAM, close/ignore/disable?").
Also, when a plugin isn't in use, it shouldn't cause a problem. Let's say the problem is Flashblock. If it isn't actively rendering (say I have only one window/tab open and it's pure text, no Flash/etc.), then it really shouldn't be using any memory. If FireBug is inactive it should use next to no memory (when I'm actively using it to check CSS/JS/etc., I expect it to use memory).
I'm glad they are working on this. I've heard this complaint for a while. But even if the problem is the plugins, it needs fixing or roping in.
How about being able to set memory limits for plug-ins, Mac OS 1-9.x style? Maybe total, maybe per active page, maybe both. Just a random idea.
Re:C++ long-in-the-tooth? (Score:5, Interesting)
Firefox's problem is architectural and not one of garbage collection.
XUL is inherently single-threaded and JavaScript-based. Try out any XUL application and you'll see the same poor speed and resource usage as with Firefox (try Miro Player and Joost).
The Firefox developers are literally throwing out more C code with every release, replacing it with JavaScript code.
Leaks (in the classical sense) aren't what's causing Firefox's abysmal performance, and this is why Firefox 2 performs worse than Firefox 1.5, even though one of the touted "features" of Firefox 2 was supposedly plenty of fixed memory leaks.
Actually I'm pretty sure they're in denial as to the cause of their problems. Announcing they're working on fixing "memory leaks" just supports their ability to continue their delusion.
Re:Firefox != Internet (Score:4, Interesting)
Firefox was supposed to be able to withstand popularity, unlike IE. Look at it now: people say it's slow and a RAM hog, and hackers have started attacking it successfully just as much as IE.
At least we see it for what it is: the stick in Microsoft's eye that made them resume IE development.
FireFox == Internet (Score:3, Interesting)
Maybe not, but in the Windows World, Opera is not a viable alternative to many people who find the Opera UI to be excessively daunting for casual use.
The thing that has irritated me about this is that for a very long time, the FireFox leadership insisted that there were no memory issues, that it was a specific type of use profile, and that if you knew the secrets of how to tweak the configuration file, performance would improve. This is the lamest of excuses.
FireFox is not sold as some kind of "leet" hacker browser, it's sold as a browser for the people. FireFox leadership needs to be more responsive to the feedback from "AVERAGE" users if they want FireFox to be a major player in the browser world. 10% is nice, but it's still only 10%.
More prevalent on Mac? (Score:3, Interesting)
Running "bare bones" on all Firefox installs, no plugins other than whatever may have been included with the base distribution.
Does anyone else notice this? I've switched back to Safari on the Mac in the meantime.
Re:C++ long-in-the-tooth? (Score:1, Interesting)
People keep saying this. I wonder if it's true -- I haven't seen any evidence of it. I've seen lazy/stupid programmers in all kinds of languages. Wouldn't the same argument work just as well against memory protection, and preemptive multitasking, and compilers?
and in some cases even prohibit the programmer from being able to manage their resources.
That's a problem with your specific GC, not a problem with GC. JWZ even ranted [jwz.org] about this. You hear Java users complaining about memory, and Java programmers bitching about the GC, but you never hear Lisp programmers bitching about their GC. Any production Lisp GC gives the programmer far more control than Java's does. Hearing people rant against GC is like hearing Yugo owners rant that automobiles suck.
C/C++ and similar languages, on the other hand, force the programmers to manage their resources.
No, they don't. How would they? If I forget to free something and it goes out of scope, bang, memory leak. How am I forced to do anything about that by the language? (I've heard more than one story about somebody who learned Java, and then went back to writing C++, and wrote many lines of code before remembering that their C++, while correct, leaked left-and-right.)
I want to be able to do more with the faster processors and additional RAM, rather than simply do the same job I could do yesterday only in "better" software.
You're right. The easiest route to fast, working software is to get it correct, and then tune for performance -- and GCs are great for that.
We also need to get back to writing applications that have good, if not great, performance with minimal resource requirements (e.g. RAM and processor). [...] But in either case it doesn't get done unless the programmers do their job, and use tools that allow them to do it.
Again, you're right. And these requirements just scream "we need a GC that doesn't suck".
java memory profile (Score:5, Interesting)
> etc. as they tend to have lazier programmers
Actually, it isn't lazier programmers. The problem is that existing garbage collection implementations have a horrible memory profile.
If you look at the memory usage of a Java program, it's about as bad as a C program that does nothing but leak memory. Practically speaking, Java does little to free memory until it has *run out of memory*. Then, when it does run out and needs to clean things up, things get slow as hell.
> The real answer is doing your job right, and using the right tool - which is not necessarily the easiest tool to use either.
Yes! Unfortunately, academics and many novice programmers (who just got finished being trained by academics) are unfamiliar with the powerful tools available like C++. Going to school can give you the mistaken impression that garbage collection is *a good thing* because everyone uses it there. The truth is that C++ is a very complicated language with a steep learning curve, but that many times it is simply the only tool that is suitable for the job.
If your program is IO-bound, like a web application front end, you are in a great position, because essentially *any* tool will do the job, even if the performance is abysmal. You can use Java, Ruby, or whatever. And you should, because those languages don't present you with the complexity of C++.
Unfortunately, many programs *are not IO-bound*, and the performance and memory profile of the underlying tool are very important. This is most true of interactive, non-parallelizable programs. A good example would be BitTorrent clients. Consider µTorrent vs. Azureus, one in C++ and one in Java. µTorrent is fairly lightweight and easy to use because of its performance characteristics. Azureus is a powerful and well-engineered program, but it sure as hell is slow and chews up memory.
Another person talks about Mozilla "denial". (Score:5, Interesting)
See this +5 comment posted below: Firefox's problem is architectural and not one of garbage collection. [slashdot.org].
Quote: Actually I'm pretty sure they're in denial as to the cause of their problems. Announcing they're working on fixing "memory leaks" just supports their ability to continue their delusion. [my emphasis]
A +4 comment to that comment discusses Firefox's "four separate memory allocation schemes":
"1. Custom malloc/free implementation. (Yes, custom - not from libc.)
2. C++ new/delete operators, which for all I know may be overridden to use their malloc/free.
3. One of the first two with reference counting to decide when to free/delete.
4. JavaScript mark-and-sweep GC.
Dealing with this causes some truly insane hacks..."
Then read the comment they don't want you to see: The memory bug is also a CPU hogging bug. [slashdot.org] At present, it is marked -1 Flamebait. However, that comment begins to discuss apparent social problems at the Mozilla Foundation, and some of the same material has been marked +5 in the past.
Re:Mod parent up (Score:5, Interesting)
Uh... No. There are some arena allocators in use in the codebase for very specific tasks, but there is no custom malloc/free.
> They're C++ objects, exposed as JavaScript objects, using something that's like XPCOM but isn't
Actually, to be exposed to JS in Gecko something more or less has to be an XPCOM object at the moment. Then the XPConnect layer handles the glue between JS and C++.
> It makes a project that's supposed to be open source effectively closed off to only the
> "official" developers
As someone who got into this project without being in any way "official", I beg to differ!
Could Linux be the difference? (Score:2, Interesting)
I used to be a casual tester for both versions 1.5 and 2.0, and I switched to each well before its official release. What I noticed was that much more effort went into creating satisfactory working versions on Windows than on either the Mac or Linux. Nonetheless, I have been pleased with the results. My use of the coming version 3.0 has been limited to running most of the stable alpha versions, and so far my limited exposure suggests an improvement over 2.0. Another factor that could make my experience worse is that I use a Mozilla version I install by hand; I long ago stopped bothering to update the supplied distribution version. Nonetheless, I have no complaints.
Another factor that should degrade my experience is that my machine's innards are not of recent vintage, having only one gig of RAM and a much older AMD CPU. Might some of the problems so vociferously expressed here be due to factors other than those so loudly proclaimed?
I am well aware that supposedly identical machines with the same software can behave very differently. I had that experience with corporate property running NT and XP, where I could get to Unix and the version control system while a neighbor could not. Nonetheless, I find it odd that my experience is so at odds with what many here have written.
Re:C++ long-in-the-tooth? (Score:1, Interesting)
No. A computer, by definition, computes. At one level, this consists of moving bytes around. At another, it consists of moving bits around. At another, it's electrons flowing. At another, it's symbol manipulation. I'm a programmer, and even I don't care about bytes, or electrons; if Firefox worked by black magic, I would not care.
If C++ is efficient, isn't assembly even more efficient? C++ isn't as good at dealing with individual bits as assembly. Won't somebody think of the bits?
But even Linux is written in C, rather than assembly, today, even though Linus is skilled at writing assembly language. Is it less efficient now because it has less assembly than Linux 0.01 did? No, it's more efficient because Linus and friends can deal in useful abstractions. If you tried to write Firefox in machine code, you'd never finish.
It sounds relatively safe to say "C++ yields superior performance and memory usage" because in one extreme case (say, an embedded system) it's true. You wouldn't use a GC there. But then, on a small enough system (say, 512 bytes of RAM) you wouldn't use a normal C library, either. There simply isn't room. Write your own. But this doesn't scale: on a gigabyte+ system, it's a net savings to use an existing (shared!) library for string manipulation, and it's a net savings to use an existing GC for memory allocation.
Concurrency and Responsiveness (Score:3, Interesting)
Although Firefox does have some issues with memory usage (and occasionally memory leaks), that doesn't seem to be the primary cause of usability issues.
In my personal experience with Firefox, I see two problems:
Both of these problems could be solved relatively easily with threads (in a number of different ways), but for some reason the Firefox developers have an irrational paranoia of anything that even vaguely resembles native concurrency. They say "the real problem is just response time, if we can respond fast enough in a single thread it's the same" - but then they never actually do it, and they definitely don't do anything that would let them recover from component crashes.
Re:C++ long-in-the-tooth? (Score:1, Interesting)
And how, pray tell, does one "cleanly" deallocate cyclic, self-referential, dynamically-typed data structures, such as the ones that naturally arise with Javascript and HTML?
With garbage collection, that's how. All possible solutions are isomorphic to garbage collection.
So the real choice is between (1) the GC-nonexpert writes their own GC to cover a subset of the data structures when they should be writing their app, or (2) to use a modern, solid, pre-written GC for the whole app.
I would also like to know how you plan to, using architecture alone, coalesce data to reduce memory fragmentation and return slack space to the system. Manually doing that to the heap is a nightmare verging on a total impossibility, whilst a good GC can do it automatically if you can afford the execution time. (And a really good GC could re-lay-out data w.r.t. cachelines to optimize performance on actual runtime data.)
Re:java memory profile (Score:3, Interesting)
µTorrent is fairly lightweight and easy to use because of its performance characteristics
I call BS. µTorrent is not easy to use because of performance characteristics. In some ways I find it stupid to use. Why do I have to use a second-level context menu to set a specific upload/download rate maximum for a specific item? Where are my keyboard shortcuts? µTorrent only wins on usability by copying what already exists. It's not a bad strategy, but I'm not sure I've seen anything greatly innovative in terms of UI in µTorrent.
Re:The memory bug is also a CPU hogging bug. (Score:3, Interesting)
I guess this is a little off topic, but I think it shows that there are certainly resource problems in the tab implementation. Someone suggested separate processes for tabs with different SLDs, and a complete JS & plugin environment cleanup when changing SLDs, which I assume could practically eliminate many XSS problems [note: I haven't done my research on that one].