
Firefox Working to Fix Memory Leaks 555

Christopher Blanc writes "Many Mozilla community members, including both volunteers and Mozilla Corporation employees, have been helping to reduce Firefox's memory usage and fix memory leak bugs lately. Hopefully, the result of this effort will be that Firefox 3 uses less memory than Firefox 2 did, especially after it has been used for several hours." Here's hoping. Frequent restarts of things on my computer make me furious. I can't imagine why anyone would tolerate such things.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • about time! (Score:4, Insightful)

    by downix ( 84795 ) on Monday September 24, 2007 @12:32PM (#20730379) Homepage
    Too many apps nowadays just throw out RAM like it was yesterday's newspaper! It is sloppy coding, and I'm tired of having to put 2GB of RAM into a system just to surf the net.
  • Bloat in general (Score:5, Insightful)

    by pipatron ( 966506 ) <pipatron@gmail.com> on Monday September 24, 2007 @12:32PM (#20730385) Homepage
    I don't mind the memory. I have plenty of gigs even in my laptop. What I mind is the general slowness that I experience with Firefox, and that makes me use Opera on my laptop even though I would feel better using an open source browser.
  • by realdodgeman ( 1113225 ) on Monday September 24, 2007 @12:33PM (#20730407) Homepage

    Here's hoping. Frequent restarts of things on my computer make me furious. I can't imagine why anyone would tolerate such things.


    Nobody forces you to use Firefox. You can use Opera, Konqueror, links or IE, or any other browser out there...
  • by wwmedia ( 950346 ) on Monday September 24, 2007 @12:34PM (#20730411)
    Too little, too late. Some people I know have switched to alternatives like Opera or back to IE7 (both use substantially fewer resources on Windows) due to all that RAM hogging.
  • by IndustrialComplex ( 975015 ) on Monday September 24, 2007 @12:38PM (#20730477)
    You know they messed up bigtime when people would opt to go back to IE.
  • by mgkimsal2 ( 200677 ) on Monday September 24, 2007 @12:39PM (#20730499) Homepage
    I can't imagine why anyone would tolerate such things.

    Well, my guess is that you *are* tolerating it, as are millions of others, simply because you're using it (either older versions of IE, or current versions of Firefox). Can't comment on IE7 cause I don't use it much, but IE6 rarely crashed for me. IE3-5.5, almost daily crashes.

    5 years ago people would constantly belittle IE users because it had frequent crashes, and point to the 'superior' Mozilla suite. Today, FF has morphed into something which can't be used, with plugins, for more than a couple days max without needing to be reset. I add the caveat about 'with plugins' because I'm not sure I know many people who run a bare-metal Firefox. Most people use one or more extensions. This has been a huge marketing push for FF - "It's lean! Only use what you need! Get rid of 'bloat' - package everything in extensions!"

    Putting things in extensions makes the base 'leaner' but has led to a situation where there's no centralized testing for, or even acknowledgement of, memory leak bugs (and other bugs, but this is the obvious one). I still read comments from people who claim they never have leaks with FF (we'll see some on this thread, no doubt). It's not that I don't believe them, but their usage patterns are likely different from mine. I have about 6 plugins that I love to use, and I like to keep my browser going. The idea that MSIE is more "stable" than FF for daily usage should remind people that resting on your laurels is not an option. What cut the mustard 5 years ago isn't gonna cut it any more.

  • Re:Here we go... (Score:5, Insightful)

    by savuporo ( 658486 ) on Monday September 24, 2007 @12:39PM (#20730501)
    There are very few "things that require a lot of memory", really. Most of the "things" you do in programming are tradeoffs, often between complexity of implementation, speed and memory requirements. There are usually off the shelf algorithms for each approach. Simplest solutions are often the most inefficient ones.
    There is no reason why a minimal web browser could not be implemented using something like ~100 KB of memory; in fact, I have seen the code to one. However, it won't be a) fast, b) portable, c) full-featured, or d) very easy to understand.
  • by deftcoder ( 1090261 ) on Monday September 24, 2007 @12:41PM (#20730529)
    The culprit is poor programming.

    If you can't code without hand-holding tools like automatic garbage collection, perhaps you belong in a different profession!
  • An act of balance (Score:5, Insightful)

    by SplatMan_DK ( 1035528 ) * on Monday September 24, 2007 @12:49PM (#20730667) Homepage Journal
    I can certainly understand why people are tired of FF memory leaks. Being a FF user myself, with open browser windows and multiple tabs all through the day, I have seen what happens to FF after 4-5 hours of intense browsing. And don't even get me started on the PDF and Flash plugins!

    Some would argue that the problem is sloppy coding, or poor encapsulation (a typical OO programmer's point of view). But please remember that even though modern browsers are GUI apps, they are coded much like low-level server processes or protocol stacks. Low-level programming in languages like C and C++ gives you more control and better performance, but at the expense of nicer development features like garbage collection and encapsulation.

    Think about it. Would you accept a browser that rendered HTML flawlessly and with absolutely no memory leaks, but took more than a minute to render each page? I think not.

    It's an act of balance, and the problem is not _always_ "sloppy coding". It is the increasing complexity of these apps, combined with user demands, which pushes development towards low-level languages. From a realistic point of view, any app written in low-level C with as many lines of code as FF is bound to have bugs and leaks. (Perhaps except code controlling nuclear reactors and NASA satellites, but then the price of each line of code is also somewhat different.)

    We - the end users - are not without blame.

    - Jesper
  • by SilentChris ( 452960 ) on Monday September 24, 2007 @12:49PM (#20730685) Homepage

    If you can't code without hand-holding tools like automatic garbage collection, perhaps you belong in a different profession!


    Or perhaps they're too busy thinking about clearly-defined objects, robust interfaces, clean documentation and the "big picture" to worry about moving individual bytes around.

    Likewise, I don't trust any artist using Flash today. They should clearly know how to code, in assembly, animation and transitions. Use of a timeline is for losers. The creative process should always be sacrificed for knowing the code inside out. /sarcasmoff
  • Reality check (Score:4, Insightful)

    by MMC Monster ( 602931 ) on Monday September 24, 2007 @12:53PM (#20730741)

    I can't imagine why anyone would tolerate such things.
    5 years ago people would constantly belittle IE users because it had frequent crashes, and point to the 'superior' Mozilla suite. Today, FF has morphed into something which can't be used, with plugins, for more than a couple days max without needing to be reset.
    Reality check: Most general users do not leave their browsers open for a couple of days, let alone longer. In fact, I wager that most turn their computers off at the end of the day.

    No, I don't have a source for my statement. But ask people you know who are not in the tech industry. The one outlier group is Mac users, who don't realize that closing a browser window doesn't remove the program from memory.
  • Perhaps the culprit is C++. Languages that have automatic garbage collection or are database-engine-based tend to clean up automatically or cache to disk more effectively.
    Actually, the big language culprits would be those with auto-garbage collection, etc. as they tend to have lazier programmers that don't "need" to manage their own resources, and in some cases even prohibit the programmer from being able to manage their resources.

    C/C++ and similar languages, on the other hand, force the programmers to manage their resources. In those cases, the programmers would likely be just not designing their programs well, or employing bad resource management. Yes, managing resources can be hard - one project I worked on had to go through several months of testing to get the resources properly managed, and even then some of the resources were still a little uncontrollable due to legacy code or Windows APIs, but overall the thing was pretty stable and any memory leaks were mostly due to Windows APIs.

    In either case, I can't tell you how many times I have heard (especially from Java programmers) something along the lines of the following: "RAM is cheap", "processors are getting faster", "computers will be ready for this when we deliver it", "hardware is cheaper than programmers"

    No offense, but relying on hardware always getting faster, or the cost of adding more RAM always getting cheaper, is a bad premise. Already with multi-core processors we are seeing slower cores combined into a single package and rated as equivalent to a faster processor (e.g. two 1.8 GHz cores rated equal to a single 3 GHz core); thus the premise breaks down. Also, I want to be able to do more with the faster processors and additional RAM, rather than simply do the same job I could do yesterday, only in "better" software.

    The real answer is doing your job right, and using the right tool - which is not necessarily the easiest tool to use, either. We also need to get back to writing applications that have good, if not great, performance with minimal resource requirements (e.g. RAM and processor). If we're not doing this at the API/library level - at the very least - then the programs and libraries that rely on that level will have worse performance no matter what they do. But in either case it doesn't get done unless the programmers do their job, and use tools that allow them to do it.
  • by caerwyn ( 38056 ) on Monday September 24, 2007 @12:56PM (#20730797)
    Or perhaps they're too busy thinking about clearly-defined objects, robust interfaces, clean documentation and the "big picture" to worry about moving individual bytes around.

    Actually, I'd say they're not busy enough- if they actually had been, proper memory management should simply fall into place on top of a clean architecture. If you're trying to shoehorn memory management back into something that didn't support it before, you're going to have issues- and this applies whether you're doing c/c++ style management, reference counting, or garbage collecting.
  • by Kristoph ( 242780 ) on Monday September 24, 2007 @01:00PM (#20730883)
    A computer, by definition, 'moves bytes around'. One might argue that this is the job of the computer (or language or VM or whatever) and not programmer but if you don't understand how / why and when the bytes are moved around then you are a poorer programmer for it.

    C++ yields superior performance and memory usage compared to higher-level languages in the hands of a skilled C++ programmer, and it can lead to bloatware in the hands of a novice.

    There is this old saying about blaming your tools for a poor job and it applies to software development too.

    ]{

  • by i7dude ( 473077 ) on Monday September 24, 2007 @01:09PM (#20731011)

    Your argument is nonsense. If what you said held true, large-scale applications should be able to be written in assembler. Without high-level tools it wouldn't be feasible to create applications of the scale we do today.


    Wrong. What he is saying is that people who choose to use C/C++ for their applications should be competent enough to properly handle their own allocation and de-allocation of memory. If your abilities as a programmer preclude you from properly managing your application's memory then you need to look at alternatives that will take care of that for you.

    There are plenty of languages out there that offer things like garbage collection. Developers need to make better choices about which tools they use to meet their needs, and also understand their limitations and work within those parameters.

    dude.
  • Mod parent up (Score:5, Insightful)

    by Anonymous Coward on Monday September 24, 2007 @01:14PM (#20731111)
    The parent post is completely correct. The main problem with Firefox and Mozilla in general is the XUL architecture. It is single-threaded, such that JavaScript cannot run in multiple threads. And I've tried. It can't be done; Firefox will actually crash. (You can try with XPCOM, but it will crash.) I asked, and the solution given to me by the Firefox community (which isn't necessarily the developers, mind you) was to use Java from JavaScript, which is a non-starter if you want to write a cross-platform extension. (Not because Java isn't cross-platform, but because you can't guarantee that Firefox will be installed with a JVM.)

    Firefox's problem is architectural and not one of garbage collection.
    Unfortunately part of their problem is garbage collection - it's due to their architecture, but they have at least four separate memory allocation schemes going:

    1. Custom malloc/free implementation. (Yes, custom - not from libc.)
    2. C++ new/delete operators, which for all I know may be overridden to use their malloc/free.
    3. One of the first two with reference counting to decide when to free/delete.
    4. JavaScript mark-and-sweep GC.

    Dealing with this causes some truly insane hacks, like the absolutely insane DOM C++/JavaScript implementation. (They're C++ objects, exposed as JavaScript objects, using something that's like XPCOM but isn't due to the overhead XPCOM imposes. I really don't understand it.)

    Ultimately, though, it's worse than all that. All this crap leaves the code completely opaque, and actively prevents contributors from contributing code without having to learn an insane amount of infrastructure decisions.

    It makes a project that's supposed to be open source effectively closed off to only the "official" developers: almost open source in name only.
  • by ultranova ( 717540 ) on Monday September 24, 2007 @01:17PM (#20731165)

    If you can't code without hand-holding tools like automatic garbage collection, perhaps you belong in a different profession!

    Statements like this are why I prefer Java programs to C ones. Mandatory bounds checking means that no macho idiot can turn it off, no matter how full of hubris he is. But even assuming a 100% perfect coder, does it really make sense to use his precious time to worry about memory management when the computer can do that automagically?

  • by Hatta ( 162192 ) on Monday September 24, 2007 @01:26PM (#20731293) Journal
    Likewise, I don't trust any artist using Flash today.

    That's a good instinct. Never trust anyone using Flash.
  • by arth1 ( 260657 ) on Monday September 24, 2007 @01:26PM (#20731303) Homepage Journal

    My notion is that if you're finding yourself doing a lot of new/delete statements, it's about time that you considered using smart pointers instead.

    My notion is that if you find yourself doing a lot of new/delete statements, it's about time you considered using a programming language that gives you fine-grained and direct control.

    Why should you have to remember to deallocate memory every last time you outscope some object when you can have the object do it for itself?

    Because a routine can still be called that may access the object, so it can't be deallocated? A compiler has to err on the side of caution, but that's an err nonetheless.

    A human can know whether an object is truly never going to be called again, whether it's highly unlikely to be called again and so inexpensive to recreate that a special case recreating would be better, or whether it's not used right now, but so expensive to recreate that you want it to stick around anyhow. And other tidbits of information about the program and its IO and data that a compiler or interpreter just doesn't have.

    Garbage collection can make a mediocre program more robust, no doubt about it. But it comes at the price of bloat, and will never match the efficiency of a well designed program where the programmer is in full control.
  • by jellomizer ( 103300 ) * on Monday September 24, 2007 @01:44PM (#20731545)
    I am not sure why you think C++ programmers are more careful than other higher-level language programmers. A lot of C++ programmers are really not that good; the only reason they use C++ is that they had to take it as part of their college requirements, and thus it is the only language they know. The problem with C++ is that managing memory is so labor-intensive that most people will be willing to let it leak because of all the extra work it takes to clean it up. They program in C++ not because it is the best tool for the job but because it is the one they know best. I have seen cases where C++ projects get killed because they take way too long to write and even longer to debug, while Python, VB or Java programmers seem to get the job done. Most projects are not open source; most programming is actually just custom programs for businesses, so these programmers are paid by the hour.

    I agree with you that dual cores are not equal to systems with twice the gigahertz, and that the single-core, twice-as-fast system will normally outperform them with the applications as written. But that isn't about C++ programming or Java programming; it is about multi-threaded programming. Parallel processing is a different form of programming that most programmers shy away from. But still, if you are paying a programmer $20 an hour and it takes them twice as long to get a 10% increase in speed, you would be better off buying extra RAM than paying the programmer.

    Now, if you are getting a boxed application, that is different, because the cost of application development is spread across all the people who buy it. So if doubling the price of the shrink-wrapped app, say from $80 to $160, gets everyone a 10% increase in speed, then it is worth it to put in the extra work and get it more optimized; the benefit will outweigh the costs.

    The thing that usually gets me about comments like this parent's is that they assume a completely academic computer science approach to all problems. Real life requires making trade-offs, and sacrificing performance is often a good trade-off to make because most of the time it is unnoticeable: most computers spend most of their time idle anyway, and most applications are idle waiting for input. So in the occasional heavy-processing moment - say, an HTML renderer adding one second to a page load - most people won't notice unless they are going back-and-forward-button crazy, or doing a batch rendering job. As for memory, I am surprised you didn't bring up the large quantity of 32-bit systems still being sold as new that can only handle a max of 4 GB of RAM; for a large population, RAM limits are an issue again.
  • by Anonymous Coward on Monday September 24, 2007 @01:51PM (#20731671)

    I can't tell you how many times I have heard (especially from Java programmers) something along the lines of the following: "RAM is cheap", "processors are getting faster", "computers will be ready for this when we deliver it", "hardware is cheaper than programmers"
    Depending on the context in which this is said, it can be a reasonable statement (even though these sound suspiciously like quotes fabricated to further your argument).

    If this is said in the context of writing an application, desktop or otherwise, that is intended to be run by someone other than the programmer or his immediate organization, then it's a terrible approach to the problem. But if you're writing an app that is intended to run solely on your company's application servers, it's perfectly reasonable to say that developer time costs more than the number of app servers or extra RAM it would take to deal with the quick-and-dirty solution. When you have application servers behind a load balancer, you can basically add application servers as the need arises. There isn't the same need for tightly-coded, memory-efficient code that there is when your code is being run by third parties.

    None of this is an excuse for sloppy programming that could easily be done better with not much more effort. But if the effort to do it in a resource-efficient manner really is significantly more than to do it in a simpler fashion, then it's perfectly reasonable to pick the easier route.
  • by griffjon ( 14945 ) <.GriffJon. .at. .gmail.com.> on Monday September 24, 2007 @01:57PM (#20731763) Homepage Journal
    I use firefox on my slow, memory-starved laptop. Opera is faster, but I just can't live without adblock in the modern web age of big flashy annoying ads.

    That being said, I'd love a lighter main firefox branch that would run happily with less ram.
  • by Wolfier ( 94144 ) on Monday September 24, 2007 @02:04PM (#20731875)
    How about making at least the Javascript engine and the Flash plugin framework multi-threaded?

    IE has been lowering the CPU priority of Flash applets for years so if you have 100 Flash ads open, it won't bog down your browsing. On Firefox, try opening a couple of tabs in Yahoo and it basically grinds to a halt.

    It used to be that in NS4, I could see the "nsplugin" process, so I could renice it to achieve the same effect. On Firefox, that's not possible.

    And if I happen to leave Gmail open, my CPU usage (on a lowly Sempron) hikes to 30% twice a minute. On IE the CPU usage stays low. I suspect it's due to a multi-threaded JavaScript engine in which individual threads can be prioritized.

  • by Doctor Crumb ( 737936 ) on Monday September 24, 2007 @02:06PM (#20731921) Homepage
    All of the best programmers I know are lazy; it's the well-meaning hard-working ones who duplicate functionality, write large fragile functions to solve a single case of a generic problem, and who create difficult and obscure interfaces. Laziness encourages you to let the computer do those things that it is good at, encourages code re-use, and encourages using built-in features wherever possible.

    The only case where you should be managing your own memory is in embedded programming or high-performance applications. Since TFA is about a *web browser*, neither of those applies. It doesn't really matter to the average person whether Firefox leaks memory or not; they care whether it correctly renders their homepage. It's a smart move for FF to have concentrated on Getting It Working first, since that's the constraint that will actually determine their product's success or failure.
  • Re:Symmetry (Score:3, Insightful)

    by jimicus ( 737525 ) on Monday September 24, 2007 @02:27PM (#20732225)
    You jest (I assume), but it is actually quite possible to code like that. In fact, in a previous job of mine part of the internal coding guidelines said "Don't use malloc".

    There is a major difference though - the problem they were trying to solve didn't involve a user interface and didn't deal with data of undefined size - it was basically a large database app.

    Of course, under the hood the compiler has to allocate memory at some point for more or less everything - but it's something the compiler can worry about, not the developer.
  • by dgatwood ( 11270 ) on Monday September 24, 2007 @02:49PM (#20732643) Homepage Journal

    Precisely. A skilled craftsman does not blame the tools. A skilled craftsman does the job right, and if it cannot be done right with the tools at hand, he/she goes and gets tools that are appropriate for the job.

    Poor programming is possible in any language, and garbage-collected languages are no exception. I would also caution that garbage-collected languages tend to encourage more novice programmers because of the apparent ease of use (it isn't really easier). This results in a larger number of poorly-written apps by people who think they know how to write software. Taking away the need to explicitly manage memory just encourages lazy programmers who can always find something else to be sloppy about.

    As for garbage collection making this sort of thing magically go away, that simply isn't the case. Working around garbage collection with things like "soft references" is a disgusting hack and is actually far harder than simply doing explicit memory management in the first place. Anyone who says differently has never had to manage any complex data structures that reference each other in non-trivial ways. The alternative is to basically write your own code that explicitly walks the data structure, deleting circular references, etc. If you're going to that much trouble, you are doing just as much work as you would for explicit memory management, but without the performance benefits from actually being able to destroy the objects immediately, and thus garbage collection is just hurting performance without providing any real benefit.

    Basically, apart from the really trivial cases (most of which could be solved just as easily by simply creating a stack-local auto-destroy variant of malloc), garbage collection causes more problems than it solves. In my book, garbage collection in a programming language ranks right up there with multiple inheritance as one of the worst ideas ever conceived.

  • by Anonymous Coward on Monday September 24, 2007 @02:55PM (#20732747)
    Amazing... Last week when the huge gaping security holes in Firefox were discussed, I was heavily modded down for stating that between that and the huge memory leaks in FF, there was absolutely no reason not to just use IE.

    The funny thing was... people claimed I was making stuff up about the memory leaks. I guess this one will get modded down from people denying FF has horrid security. And in next week's FF security article, they will deny the memory leaks. And so it goes.
  • I am not sure why you think C++ programmers are more careful than other higher-level language programmers.
    It's not that they are more careful - but to write a successful large program in C or C++, you have to manage your resources. If you don't, the OS will kill the process when it segfaults. So, unless you are only writing small programs - like a typical high school or college class project - you have to manage your memory. (Even then, you have to, to some degree.)

    A lot of C++ programmers are really not that good; the only reason they use C++ is that they had to take it as part of their college requirements, and thus it is the only language they know.
    The only reason a lot of folks use Java is because they learned it in college. That doesn't make them good programmers. Some vo-tech schools only teach .NET - VB before that - and the same thing resulted. This can be said about nearly any language. The trick is in the student who is able to learn multiple languages and take steps beyond that - otherwise, they'll never be a really good programmer. (I usually recommend that people learn at least three languages - a C-family language, a Basic-family language, and a Modula-family language - as well as some scripting languages. This puts them in different enough programming environments, with languages that are different enough from each other, that it really does force some learning that goes above and beyond a single language.)

    The problem with C++ is that managing memory is so labor-intensive that most people will be willing to let it leak because of all the extra work it takes to clean it up.
    You can say that about any language, really. The problems might change a little, but the base problems are still there. Java programs end up as memory hogs because you can't manage your memory - it's left to the GC - and many programmers write in Java because it is all they learned in college; regardless of whether or not they want to lower memory usage by managing it, Java doesn't provide the tools to do so.

    That may not be the case for all GC languages, but it is the case of what is perhaps the most prominent GC language - Java, which also has had the unfortunate aspect of being forced on a lot of companies because the programmers they could hire out of college didn't know any other language. (Sad reflection on the colleges and universities they came out of too - but that is also partly due to vo-tech colleges as well, where training is typically limited to the quick & dirty.)

    The thing that usually gets me about comments like this parent's is that they assume a completely academic computer science approach to all problems. Real life requires making trade-offs, and sacrificing performance is often a good trade-off to make because most of the time it is unnoticeable: most computers spend most of their time idle anyway, and most applications are idle waiting for input.
    In no way was I assuming a "completely academic computer science approach to all problems" - in fact, a lot of academic computer science is strongly behind the GC languages and not teaching students how to manage resources.

    What I am talking about is doing proper engineering of software - proper design, architecture, and implementation - such that resources are managed appropriately in the program. It really is not a hard job to do, and when done it can actually speed up implementation and debugging.

    By doing the rag-tag job that Academic Computer Science pushes, debugging will be long and hellish, and will only add to cost over-run.

    Additionally, I am primarily pushing management of resources in this thread, including memory. When it comes to performance, then yes - you need to run a profiler and optimize the program accordingly, not spend all your time optimizing every line of code throughout the entire program.
  • by Wavicle ( 181176 ) on Monday September 24, 2007 @03:37PM (#20733391)
    You realize, of course, you are seriously going against Slashdot groupthink here. Still, if I had mod points, I'd give you one. Macho programmers who think they are too awesomely skillful to screw up are the ones whose screwups always take me the most time to chase down.

    Frankly if you can't look at a problem and then pull out five or six languages from your tool belt and evaluate which will be the best for this job, then you are a bad programmer. Don't code in C++ that which could easily be done in Python. Don't code in Python that which could easily be done in Bash. If you don't have a compelling argument for using C, DON'T USE IT!

    Sometimes I think Java is awful for no other reason than that companies tend to believe using one language for everything is a net gain. That has never been my experience, except when your very best programmers aren't all that good either. If you insist that everything run on the Java runtime, use Jython and embed Python. A good example of multi-language gains can be seen with embedded Lua. There are many applications out there that use Lua "under the covers" so that things that do not have to be written in C++ aren't. This includes games (I believe WoW is one).
  • by mattgreen ( 701203 ) on Monday September 24, 2007 @03:55PM (#20733679)
    Have you personally audited every line of the Firefox codebase?
  • You know, everything you said would be true IF our job were to squeeze the last bit of performance from a machine. But it's not. Our job is to enable the user. A programmer focusing on low-level bit shifts, memory storage, string management, etc., takes his eyes off the truly "big picture": the domain you are focusing on and the "business" problems therein.

    Our users want results that provide them with real value, if performance is part of that value then so be it. Use the proper tools for the job.
    Our job is to provide them with programs that work and function as expected and within a user-perceivable performance spec. Users have a perception of performance - and if that perception does not match what they want, then your program is in trouble, however hard that perception may be to achieve.

    As another commentor [slashdot.org] points out, it is very true that performance does matter. Server software must be very responsive, and server admins care very much about the user perception of their server's performance and the perceived performance of the services it renders.

    However, while performance problems are more tolerable on the user's desktop, they still matter a great deal. Users don't like sitting around waiting for an application to become ready to use, or for an application to play catch-up. That costs time, and for businesses that costs money. Simple things such as managing resources can often reduce that wait to something more tolerable and less costly - but only when the programmer deploys such tactics and uses a language (GC'd or not) that allows them to do so.

    Ever have a user complain that your program didn't respond fast enough? One good example: on one network I use, there are no reverse-DNS lookups and some other network basics are not provided. As a result, some programs (both IE and FF) stop responding for up to 5 minutes if I accidentally type in a bad URL. That reads as a user-perceived performance issue with the application (it was my first thought, until I could show that the same problem did not exist on another network with the same machine), since the application locked up - the user interface was unusable and would not repaint while the software waited. IE and FF could mitigate this by pushing that DNS lookup into a worker thread. While this is one example of an external issue affecting a program's internals, the underlying issue - a user-perceived performance problem - extends quite far.

    A few other examples: How often have you waited on MS Outlook to catch up with you? How often have you waited for MS Word to complete a task - one you didn't start - so that you could continue working on the document? How often have you worked on a document and had the computer stop responding while it finishes something else? How often have you upgraded Windows or Office to a newer version and ended up having to upgrade the PC, even though you didn't do anything different and weren't using any new features?

    A Pentium 75 with 32 MB RAM should be sufficient for basic email; try it with most programs these days, however, and the system will feel bogged down - even though a machine like that could handle email, Office, and more when it shipped back around 1995.

    Performance is a big thing, and the perception of performance is even bigger.
  • by m50d ( 797211 ) on Monday September 24, 2007 @05:04PM (#20734693) Homepage Journal
    No professional computer programmer should be incapable of programming without automatic garbage collection, just as no aeroplane pilot should be incapable of flying without an autopilot. You shouldn't be doing it very often, but you absolutely should have the ability.
  • by plover ( 150551 ) * on Monday September 24, 2007 @05:28PM (#20734995) Homepage Journal

    Frankly if you can't look at a problem and then pull out five or six languages from your tool belt and evaluate which will be the best for this job, then you are a bad programmer.

    While I agree with this sentiment in principle, in practice it has proven to yield unworkable solutions. Different people bring different skillsets to the table. You may have a dozen developers who all have C++ in common, but to varying degrees. One may be more skilled in Perl, another in Smalltalk, another in Python, and three more in Java. Divvy up the specs and tell each one to "write your code using the best tool for the job," then spend another year trying to integrate the pieces - and when they quit, try to hire someone who can maintain the hydra.

    Picking a single language for a project (at least at the component level) is pretty much a requirement.

    Even though they try to hand-wave it away, this has been a big problem in the Microsoft .NET world. When it was introduced, Microsoft promised that all .NET languages were equal, first-class languages (my interpretation at the time was that C# programmers were instantly demoted to VB programmers :-( ) and that a developer could write in whichever .NET language they were most comfortable with. But there are C# programmers and VB.NET programmers who don't really speak each other's language, even though they all compile down to the same MSIL. Trying to get them to maintain each other's code leads to a lot of squabbling.

    It's easy to say "A good programmer can write in any of these languages" but in reality it's much harder to find a lot of good programmers that are both willing and able to competently do maintenance in all of the languages you might end up with.

  • by Ungrounded Lightning ( 62228 ) on Monday September 24, 2007 @05:47PM (#20735313) Journal
    Perhaps the culprit is C++. Languages that are garbage-collected or database-engine-based tend to clean up automatically or cache to disk more effectively. C/C++ just seems to have so many low-level crevices to accidentally mess up in.

    Like C, C++ gives you more than enough rope to hang yourself.

    C++ has four memory models for object instances:
    - Global/static: permanent, one-per-program instances.
    - Stack/locally scoped: local variables of class type in functions/subroutines, or in a limited scope (between curly brackets) within them.
    - Member/class scoped: non-static variables of class type which are members of another class.
    - Heap/dynamic: created with "new" and released with "delete".
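    Those four categories can be sketched in a few lines (the Widget/Panel names are purely illustrative):

```cpp
#include <string>

struct Widget { std::string name; };

Widget g_widget{"global"};              // global/static: one per program, lives for its whole run

struct Panel {
    Widget member{"member"};            // member/class scoped: lives and dies with its Panel
};

int demo() {
    Widget local{"stack"};              // stack scoped: destroyed when this block exits
    Widget* heap = new Widget{"heap"};  // heap/dynamic: the programmer must track and free it
    int len = static_cast<int>(heap->name.size());
    delete heap;                        // forget this line and the Widget leaks
    return len;
}
```

    Only the last category requires manual bookkeeping - which is exactly where the leaks come from.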

    The programming model for management of dynamic memory is algorithmic: The programmer is expected to keep track of when objects are no longer reachable and free them.

    C++ provides enough hooks to construct reference-counting smartpointers that can delete dynamic instances when their refcounts go to zero. But reference counting isn't a general solution: A set of mutually-referencing objects can become disconnected. They can no longer be reached and should be freed. But each references another, so their reference counts are non-zero and they persist. This is why garbage collection requires full-blown forest-walks (or incremental partitions of them) to reliably avoid memory leaks.

    Unfortunately, C++ has a subtle bug in the specification (and standards) that prevents building a general garbage collection system within it (even with preprocessor assistance).

    The problem is that garbage collection is one of the ways that an external routine can (properly) call a virtual member function (or do the equivalent) on an object that is midway through its construction or destruction and absolutely must get the correct version of the function for the stage of construction in question.

    This occurs because an object can have pointers to heap-allocated ("dynamic") objects as variables at multiple levels of inheritance. To build in a garbage collector, your objects need a way for the collector to identify their pointers and follow them. Anything that allocates memory may provoke it to do this as a side-effect, and routines called during construction (or destruction) may allocate memory, or call something (that calls something that calls something ...) which does so. If this occurs in the base class of a derived class whose member variables are pointers, the latter aren't initialized yet and contain leftover heap or stack junk. So the garbage collector can go awry and get totally lost.

    To avoid this you build heap-allocated objects (and those which point to them) so that they contain a virtual function that enumerates the pointers in its own level (and those more baseward) and override this as you proceed through the stages of construction, so the pointers at each level are enumerated only IF they have been initialized. (There are constructs other than garbage collection with similar issues, and for some of them the issues are also significant on destruction, as various levels are finalized and their virtual functions would no longer perform correctly.)

    C++ actually gets this correct during the execution of the objects' own constructors and destructors themselves. (Other OOP languages, such as Smalltalk and Objective C, don't. Smalltalk gives you the "subclass" version during the construction of all the levels of "superclasses" - thus breaking the debugging of the superclasses whenever you override a method that is used in a constructor. It gets away with garbage collection because it's built-in and handled separately.)

    Unfortunately, member variables of object type also have construction and destruction, which might provoke garbage collection (or what-have-you) as a side-effect. During the construction of members you're guaranteed to have times when other members are not yet initialized - which ma
  • by tknd ( 979052 ) on Monday September 24, 2007 @07:16PM (#20736361)

    I agree! Except I can't fully blame Firefox. I also place a lot of blame on websites and the direction some sites have gone - take, for instance, Slashdot, and even more annoyingly Digg. The weight of these sites has grown considerably for no purpose other than looking better. I can't argue against looks; looks sell. But on the other hand it hinders the general experience, as websites keep adding more and more layers, making the browser's job more and more complex. Websites and website developers are equally responsible for the degradation of the web's performance.

    I'm not sure the problem or tension will ever be resolved either. Web developers always want to offer the next new fancy feature and browsers will always be one (or a few) step(s) behind implementing the next spec. The result is a tool pushed beyond its original intentions at the cost of performance.
