Firefox Gets Massive JavaScript Performance Boost 462

monkeymonkey writes "Mozilla has integrated tracing optimization into SpiderMonkey, the JavaScript interpreter in Firefox. This improvement has boosted JavaScript performance by a factor of 20 to 40 in certain contexts. Ars Technica interviewed Mozilla CTO Brendan Eich (the original creator of JavaScript) and Mozilla's vice president of engineering, Mike Shaver. They say that tracing optimization will 'take JavaScript performance into the next tier' and 'get people thinking about JavaScript as a more general-purpose language.' The eventual goal is to make JavaScript run as fast as C code. Ars reports: 'Mozilla is leveraging an impressive new optimization technique to bring a big performance boost to the Firefox JavaScript engine. ...They aim to improve execution speed so that it is comparable to that of native code. This will redefine the boundaries of client-side performance and enable the development of a whole new generation of more computationally-intensive web applications.' Mozilla has also published a video that demonstrates the performance difference." An anonymous reader contributes links to the blogs of Eich and Shaver, where they post some further benchmarks.
  • As fast as C code??? (Score:5, Interesting)

    by ACDChook ( 665413 ) on Friday August 22, 2008 @10:10PM (#24714765)
    Correct me if I'm wrong, but I was always under the impression that, as an interpreted language, JavaScript will never be able to run 100% as fast as natively compiled C code.
  • It's a question of what is "core framework" type stuff, and what is "actual application". Things like UI layout, interaction, networking, security, caching, and rendering - as well as executing runtime JS - are "core" functionality. I'd wager that >90% of the binary code of Fx and, say, Thunderbird is the same.

    And most of that core stuff is written in C++. Well, actually, it's written in an obscure dialect of C++, developed when Netscape ran on a dozen different platforms with mutually incompatible C++ compilers.

    But the 10% that makes an application engine a web browser, or a mail client? Most of that is written in JavaScript. And most of it is "leaf" code, with little cross-calling and few dependencies that don't go through the underlying engine; stuff where JavaScript's near-total lack of "programming in the large" support doesn't much matter.
  • Dr. Michael Franz (Score:5, Interesting)

    by tknd ( 979052 ) on Friday August 22, 2008 @10:56PM (#24715109)

    The theories behind tracing optimization were pioneered by Dr. Michael Franz and Dr. Andreas Gal, research scientists at the University of California, Irvine.

    Hey that's my old compilers professor and my school!

    This PDF [uci.edu] looks like the paper the article is referencing.

  • by MasterC ( 70492 ) <cmlburnett@gm[ ].com ['ail' in gap]> on Friday August 22, 2008 @11:01PM (#24715163) Homepage

    I've written my share of JS-heavy apps and the boost will be nice for that. However, my complaints with JS don't lie with performance.

    • Tied too much to the browser. JS works great for some (some love it) but syntactically I hate every last part of it. However: web == JS so I have no other option.
    • Typing. Yeah, it has types but they're practically worthless. A Number represents a float/double and an integer? Say what?
    • Type checking.
    • No reflection.
    • No dictionary. Sure, you can use an Object as a dictionary, but the second someone prototypes it to add root functionality, you've introduced other items into your "dictionary". (I'm looking at you, prototype.js.) A sketch of this problem follows the list.
    • Nothing resembling libraries. No dependencies, etc.
    • It's bastardized to accommodate the shortcomings of HTML (drop downs, combo boxes, etc.)
    • Obeys Postel's law [wikipedia.org] too much. Error handling and exceptions are in a sad state.
    • No threading. No locking. Nothing resembling concurrent programming. The more complicated your app the more arbitrary events and multithreading are important.
    • No classes. Prototyping & cloning is a neat paradigm for where it fits but so do class-based objects. This isn't just JS I have this problem with. Being able to do both and using the right one where necessary would be great.
    • When is the document loaded? And if you have two libraries vying for that event? (See library complaint)
    • Since it has no real library support I have to blame the browser for not providing more general functionality. XML parsing, date stuff is abysmal, and other "routine" stuff you do when making web sites.
    • Scoping. Scoping is mind-numbingly bad.
    • Namespaces (again, see library complaint) are implemented via object nesting, which isn't really namespaces
    • Logging and debugging. I haven't delved into the likes of Firebug to see how it works but when the language (again no libraries so I blame the language) itself only provides alert() then it's clear the creators weren't thinking about debugging at all. At least IE natively will let you debug JS!
    • Standard dialogs are alert() and confirm(). Anything and everything else you have to roll your own. I really, really don't want to write something for a Yes/No dialog instead of OK/Cancel confirm().
    • Drag-and-drop. If you've done it then you know it's no walk in the park.
    • Browser identification and JS version identification. Why should I have to jump through hoops, poke & prod things, and guess at what my JS run-time is? Everyone has their own means to detect it and it's absolutely ludicrous. I'm fine if there's no real "standard" but at least give me the variables to know what I'm writing against so I can adequately work with it. (Again tied too close to the browser.) Every language I use frequently has means for me to identify such things.
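
    To make the dictionary complaint concrete, here's a minimal sketch (hypothetical code, assuming some library has extended Object.prototype the way prototype.js does):

        // Somewhere, a library "helpfully" extends Object.prototype:
        Object.prototype.clone = function () { return this; };

        // Your innocent Object-as-dictionary now has a phantom entry:
        var dict = {};
        dict["apple"] = 1;
        dict["banana"] = 2;

        for (var key in dict) {
            // Iterates "apple", "banana"... and "clone".
            // The usual workaround is to guard every single loop:
            if (dict.hasOwnProperty(key)) {
                // only genuine entries land here
            }
        }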

    I think that's enough. I'm sure you could easily argue back but this is my rant about why this boost is not the saving grace to JavaScript.

    Basically my point is that performance does not bring JS up another tier. It just prolongs the pain of having a grossly inadequate language for rich application development. JS does have some nice things about it (first-class functions, closures, for(..in..), etc.) but in no way would I consider it "good" for application development.
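
    (For the record, the nice bits really are nice; a tiny illustrative sketch:)

        // First-class functions and closures:
        function makeCounter() {
            var count = 0;            // captured by the closure below
            return function () { return ++count; };
        }
        var next = makeCounter();
        next(); // 1
        next(); // 2

        // for (.. in ..) over an object's keys:
        var point = { x: 1, y: 2 };
        for (var k in point) {
            // k is "x", then "y"
        }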

    Step back and realize the movement is pushing applications into the browser. Yes, the same apps that currently use threading; the same apps that have more than 4 input widgets (input, select, radio, checkbox); the same apps that run slow even when written in native code; the same apps that depend on libraries of code; etc. JavaScript, as is, is not The Answer and this performance boost is just a Bluepill [wikipedia.org] in disguise.

  • by nmb3000 ( 741169 ) on Friday August 22, 2008 @11:20PM (#24715281) Journal

    Does that also mean it's NOT as fast as native C/C++ code, because C# blatantly is not and thus is part of the marketing guff that you were gullible enough to believe?

    .NET languages have a JIT compiler that is automatically invoked when the application is run. This compiles the program down to native binary code for the current architecture and operating system. The resulting binary is cached on the system so future executions can skip the compiling process. Additionally the compiler can be executed manually when the program is installed to prevent the first-run delay.

    That said, in general, programs written using the .NET framework end up with the same basic overhead as a program written in C/C++. The biggest difference is all the .NET libraries and other managed features of the language. In this way C# would be about the same as C++ using a bunch of dynamically-linked 3rd-party libraries and a completely autonomous garbage collection library.

    Obviously this is more overhead than a small and simple C program but that wasn't the point. I think it would be interesting if you could provide pre-compiled Javascript files to browsers for execution -- give it the bytecode directly and allow it to skip the compile process. This would make Javascript much more like Java in that regard.

  • by BZ ( 40346 ) on Friday August 22, 2008 @11:25PM (#24715309)

    For what it's worth, TraceMonkey is about the same speed as unoptimized C on integer math at this point. The difference is register allocation (which TraceMonkey doesn't do yet).

    Moving to more complicated examples, things get more interesting. Since the JIT has full runtime information, it can do optimizations that an AOT compiler cannot. At the same time, the lack of a type system does hurt, as you point out. At the moment, TraceMonkey handles this by doing type inference; if the inference turns out to be wrong (e.g. the type changes), it bails out and falls back on the interpreter. Turns out, types don't change much.

    The real issue is that most real-world JavaScript programs touch the DOM too, rather than just executing JS. And right now TraceMonkey doesn't speed that up at all. In fact, it can't JIT parts of the code that touch the DOM. Eventually the hope is to add that ability.
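
    A hypothetical sketch of what type stability means in practice (illustrative only, not TraceMonkey's actual heuristics):

        // Type-stable: sum and i are numbers on every iteration, so a
        // recorded trace of number-typed operations stays valid and the
        // hot loop keeps running on the compiled path.
        function sumSquares(n) {
            var sum = 0;
            for (var i = 0; i < n; i++) {
                sum += i * i;
            }
            return sum;
        }

        // Type-unstable: acc silently becomes a string halfway through.
        // A trace recorded under the assumption "acc is a number" fails
        // its type guard there, and execution falls back to the interpreter.
        function typeUnstable(n) {
            var acc = 0;
            for (var i = 0; i < n; i++) {
                acc = (i < n / 2) ? acc + 1 : acc + "!";
            }
            return acc;
        }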

  • by Anik315 ( 585913 ) <anik@alphaco r . n et> on Friday August 22, 2008 @11:27PM (#24715321)

    It would be nice to see some demos of this with John Resig's Processing.js [ejohn.org]. It could definitely use the kind of performance boost being discussed here.

    In addition to performance considerations, it would also be nice to have some additional bit depth in JavaScript.

    I anticipate JavaScript will continue to be very popular, but there are a lot of reasons other than performance that people won't want to use the language for writing desktop applications over C/C++/Java. That said, there have been a lot of recent developments that have made me cautiously optimistic about the future of the language along these lines.

  • After all, Firefox developers probably aren't the most 1337 C/C++ coders out there, but they are probably amongst the best JavaScript ones.

    Whoa! Not so fast!

    The Javascript interpreter in Firefox is written in C, and related stuff (XPConnect, etc.) is written in C++. You should go read it some time; this stuff was definitely NOT written by mere mortals.

    You can browse the source at the Mozilla Developer Center; no link, so only the truly interested will go there. Look in mozilla/js/src.

  • by Anonymous Brave Guy ( 457657 ) on Friday August 22, 2008 @11:53PM (#24715487)

    Concurrency is another big win for interpreted (and JIT-ted, like Java) code. The compiler on the target machine gets to decide how to optimize the code based on the number of processors.

    Which would be great, if only someone had invented algorithms that could take advantage of that in cases other than trivial parallelisation where the more cores the better. Unfortunately, understanding of how to do that is still in its infancy even in academia, which means that the combination of old fashioned compilation plus moderate run-time adjustments are still likely to blow away anything interpreted for a while to come, and JIT compilation is no big advantage yet either.

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Friday August 22, 2008 @11:54PM (#24715497)
    Comment removed based on user account deletion
  • by eclectechie ( 411647 ) <mredivo@@@binarytool...com> on Saturday August 23, 2008 @12:20AM (#24715623) Homepage

    I think that's enough. I'm sure you could easily argue back but this is my rant about why this boost is not the saving grace to JavaScript.

    Darn right. Virtually every one of your complaints is either based on personal taste (classes, strict typing, etc.), missing bits of framework (threading, logging), or the inability to differentiate the DOM from JS the language (load order issues, dialog complaints, etc.). About the only legitimate complaints I was able to identify are:

    * lack of modules
    * lack of namespaces

    And both of these are well-known issues that were *supposed* to be addressed in JS4 (and hopefully will be addressed in the future).

    Spoken like someone who has not programmed in enough languages to understand the GP's points.

    For myself, the "Postel's Law" item has been the cause of a lot of grief, tied in a dead heat with the lack of strict typing. Grrrr....

  • by CoughDropAddict ( 40792 ) * on Saturday August 23, 2008 @12:22AM (#24715633) Homepage

    The catch is that you pay two penalties: startup time and memory. Lots of memory: for keeping stats on what needs compiling, trampolines to call in and out of the interpreter vs. JIT native code, and the native code *plus* the byte code.

    That JITs automatically incur large memory footprint or startup time penalties is the logical conclusion you come to if you look at the JVM. But the truth is that JITs don't have to suck as much as the JVM does.

    For example, take LuaJIT [luajit.org], a JIT for the already-speedy dynamic language Lua. It speeds up Lua roughly 2-5x while starting up in less than 0.01 CPU-seconds and introducing less than 20% memory overhead [debian.org]. It also takes 2-8x less memory and starts up 10x faster than the JVM [debian.org], despite the fact that LuaJIT is compiling from source, whereas the JVM starts with bytecode.

    I've never looked at the source for the JVM, so I can't say just why it takes so many resources, but I can only conclude that it's because Sun just doesn't consider startup time or memory footprint a priority.

  • by ensignyu ( 417022 ) on Saturday August 23, 2008 @12:29AM (#24715677)

    So really, memory access will be a bottleneck; you can never hope to have your program in cache, and it will be much slower than C.

    That's not always a given. If we go by the old rule of thumb that 80% of the time is spent in 20% of the code, we could stick that 20% in one place to maximize cache usage. You can even optimize so that branches that are taken are kept in the cache, and infrequently executed branches are moved out of the way, maybe in a separate page so they can be swapped to disk.

    You can do this to a certain degree at compile time, but often you don't know in advance what paths are going to be hot (it might be based on the data) and it may even change as the program runs.

    In practice, if someone tells you that Java is faster than C, they're speaking mostly in hypotheticals. Java and other high-level languages encourage so many layers of abstraction that the sheer amount of code that needs to run will probably make it slower than your typical C program. There are also a lot of things, particularly anything that needs to be dynamic, that you can't easily or efficiently compile.

    What's interesting is LLVM and .NET, where you can run C/C++ code in an interpreted/JIT-compiled environment. Potentially, with the optimizations mentioned above, you could have C code running in a virtual machine that's faster than statically-compiled C code.

  • by cbrocious ( 764766 ) on Saturday August 23, 2008 @01:09AM (#24715911) Homepage
    Aaaaaaactually, this is one place where such languages can shine. I don't know about the JS implementation in Firefox, but I know that MS.NET does memory allocation voodoo where an app domain will preallocate memory for objects. That way when you want to actually instantiate an object, you don't have the allocation step. In addition, when the GC runs and deletes objects, the memory gets put back into the pool. It leads to impressive performance for creation of a large number of objects.
  • by Anonymous Coward on Saturday August 23, 2008 @01:13AM (#24715945)

    It's true that for applications where a JavaScript function takes a long time to execute, the translation overhead can be neglected, yielding speed comparable to C; however, such functions are rare enough in ordinary web applications that the point is moot.

    They're becoming increasingly less rare as people write more complex applications for the web. There are JavaScript libraries that do complex layouts not possible in CSS/HTML, which dynamically resize when you drag and drop items around. Someone built a Keynote workalike [280slides.com], which chugs badly on slower machines -- while you could probably blame a lot of it on the DOM, the JavaScript and the Objective-C-like abstraction layer and windowing library probably don't help.

    So it might be that it "only" takes 400 ms to update the layout after you drag a slider to resize a divider, but it'd be great if it only took 100 ms, especially on old hardware. If you only compile code that actually gets executed, the translation overhead might be less than you think. In particular, the method mentioned in the article uses a less expensive type of JIT than traditional JIT compilers.

  • by cbrocious ( 764766 ) on Saturday August 23, 2008 @01:15AM (#24715951) Homepage
    The use of more memory really has nothing to do with performance at all. It's that you've got a decent bit of overhead on every object and it adds up. In addition to the objects themselves being bigger (say, a string being represented as a refcount, a length, and then the data), you have the memory the GC uses. In the end, we pay for the convenience of these objects in memory, but it really has very little to do with performance. You could, of course, argue that you have multiple forms of the code in memory due to JITing, but that's negligible compared to the data handled by these applications.
  • Re:The Greatest Idea (Score:5, Interesting)

    by totally bogus dude ( 1040246 ) on Saturday August 23, 2008 @01:27AM (#24716005)

    SharePoint 2007 is a good example. Editing of the content is via a browser-based interface, which is quite script-heavy. What's interesting is just how script-heavy it is. While testing on an old laptop we have connected to an external link, I was a bit dismayed at how slow our site was to load. I got the impression that the browser was pausing before displaying the page for some reason.

    Opening up task manager, I saw that before IE displayed the page, it would spin on 50% CPU (on an old hyperthreaded P4) for over 5 seconds before finally rendering the page. After some experimenting which yielded consistent results, I tried Firefox and the difference was dramatic, to say the least.

    The upshot of all this is that we may need to recommend to our clients that they use Firefox to edit their SharePoint 2007 sites, because it provides a significantly better experience than IE does if you have older hardware. On my own desktop at work (a reasonably modern Core 2 Duo) IE does spike the CPU usage, but generally it's for less than a second or two, so it's not really distracting. Firefox is faster, but both are quick enough that it doesn't make a real difference to a human.

    Completely off-topic: I used refreshes of the task manager process listing to judge how long IE was spinning for. I always assumed the default setting was to refresh the list once per second, and some quick testing now confirms that that is what it does. If you go to View -> Update Speed, the default setting is "Normal". The status tip for this setting says "Updates the display every two seconds". Clearly a lie - or is it? If you select "Normal", then the display does in fact update every two seconds, and there doesn't seem to be any way to get it to go back to refreshing once per second.

  • by pinkfloydhomer ( 999075 ) on Saturday August 23, 2008 @01:32AM (#24716035)

    About the memory, you are correct (for now).

    About the speed:

    I wrote a benchmark in a pure-C subset of C/C++/C#/Java and ran it in all of these languages. A simple benchmark involving calculations with integers (primes) and floating-point numbers (sums of products of square roots of primes). The result, when running a bazillion iterations:

    C# and Java took 50 seconds, C++ took 49 seconds and C took 51 seconds. Other benchmarks I made showed similar results.

    So for pure calculating, C#/Java + JIT is equally fast. For big real-life systems involving a lot of other stuff, the results might be different.

    But it has been a long time since Java et al. were 3 times slower than native code.
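
    For the curious, here is a JavaScript sketch in the same spirit (my own hypothetical reconstruction, not the actual benchmark code):

        // Primes plus sums of products of square roots, JS edition.
        function isPrime(n) {
            if (n < 2) return false;
            for (var d = 2; d * d <= n; d++) {
                if (n % d === 0) return false;
            }
            return true;
        }

        function benchmark(limit) {
            var sum = 0;
            for (var n = 2; n < limit; n++) {
                if (isPrime(n)) {
                    sum += Math.sqrt(n) * Math.sqrt(n + 1);
                }
            }
            return sum;
        }

        var t0 = new Date().getTime();
        benchmark(200000);
        var t1 = new Date().getTime();
        // (t1 - t0) milliseconds; compare an interpreter run against a
        // tracing-JIT run of the same code.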

  • by shaitand ( 626655 ) on Saturday August 23, 2008 @01:32AM (#24716039) Journal

    'actually, it's written in an obscure dialect of C++, developed when Netscape ran on a dozen different platforms'

    Really? I was under the impression that the core of Firefox 1.0 was a complete rewrite because the developers determined that the old Netscape stuff was a mess that wasn't worth moving forward with.

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Saturday August 23, 2008 @01:36AM (#24716061) Homepage

    It's true that dynamic typing is a clock-cycle hog, but almost all new languages use it... I'd be perfectly happy with a strongly-typed variant of JavaScript if it would run faster.

  • by H3g3m0n ( 642800 ) on Saturday August 23, 2008 @02:47AM (#24716385) Homepage Journal
    Makes sense. By compiling for a specific architecture you can get a 5-10% performance increase. If the JIT compiler costs less than that 5-10% and can optimise for the arch, then it's a net saving. Of course it won't be a saving over compiling the code specifically for the architecture in the first place, but it is rare to see that (Gentoo'ers and other build-from-source distros).

    Multi-core processors are going to further reduce the cost, since you can just offload the JIT to one of the other cores (or several, if it's threaded). Few programs are very well threaded, so that shouldn't be a problem, and the kinds of things that are well threaded are generally longer number-crunching tasks, so the initial compile will be a small part of the overall time.

    In addition you might find other cool things, like the ability to thread non-threaded code in some instances. For instance, there's no reason a basic function whose return value isn't used for a fairly long time couldn't be threaded: you can sort a list while you continue to pass it through the rest of the program, as long as you don't actually need to read its contents to decide where to send it, or try to do something to it.

    As for memory, RAM is cheap enough now that it shouldn't be a problem; 4GB will be fairly standard soon. Ubuntu already burns around 500MB for me doing nothing.
  • Re:The Greatest Idea (Score:4, Interesting)

    by moonbender ( 547943 ) <moonbenderNO@SPAMgmail.com> on Saturday August 23, 2008 @09:15AM (#24717931)

    On the off-topic note: don't even bother thinking about the task manager; just download Process Explorer [microsoft.com] and set it to replace the task manager. It's lightweight and vastly superior to the task manager in every way. One of the utils I miss in Linux.

  • by gbjbaanb ( 229885 ) on Saturday August 23, 2008 @09:56AM (#24718207)

    you want "memory allocation voodoo" in a lower level language? easy. overload the new operator in C++ and you're done. We did this for a very very fast, very very scalable system ages ago (back when 900mhz CPUs were teh win). You basically pre-allocate a pool of fixed-size blocks (eg 16 bytes, 32 bytes etc) then grab the first free one off the relevant pool when you need an object. And without the overhead of a garbage collector too!

  • by gbjbaanb ( 229885 ) on Saturday August 23, 2008 @10:02AM (#24718249)

    Programming functional style (and limiting side effects) makes this task easier.

    Programming functional style (and NO side effects) makes this task easier. With multi-threaded apps, it's an all-or-nothing approach. Some side effects will ruin your week in a difficult-to-reproduce (let alone debug) way.

    Even then, I think you'll find it's next to impossible to achieve without programmer hints to the compiler, but then you might as well make those hints into threading features of the language and let the programmer write threaded code.

  • by smallfries ( 601545 ) on Saturday August 23, 2008 @11:04AM (#24718713) Homepage

    Damn. This is going to undo my mods in this discussion.

    Unfortunately your argument has a hole in it at this point. I was just about to mod your earlier posts insightful, but I thought I'd correct you instead. If you write your JIT compiler in C, then it takes the JavaScript as input and outputs native code. This glosses over the interactive nature of the JIT compiler but is largely true. The compiler does not execute code in the language that it is written in. It executes code in the language that it is emitting. The language being emitted is native assembly language. So you do not have a C program that does the same thing at the same speed. You have a C program that generates code, which when run does the same thing. For this reason the output of the JIT can be faster than C, even if the JIT is written in C.
     

  • C is inefficient (Score:5, Interesting)

    by speedtux ( 1307149 ) on Saturday August 23, 2008 @11:16AM (#24718803)

    You are ABSOLUTELY wrong! C# by its very nature can not be as fast as C.

    The C# JIT has all the information that a C compiler has (essentially, the entire source code). In addition, it has a lot of global program information and it has runtime statistics. And, the C# language has better defined semantics. All of this taken together means that C# can be optimized better than C.

    In terms of performance, C is a lousy language; Fortran is a "faster language" than C.

    The only reason C even runs as well as it does is because people have invested 20 years in making compilers squeeze out the last cycle, because C compilers play fast and loose with C semantics at high optimization, and because even CPUs have been tuned to accommodate its semantics.

  • by Anonymous Coward on Saturday August 23, 2008 @01:15PM (#24719623)

    At a previous job, we experimented with the profile guided optimizations provided by Microsoft's Visual C++ compiler. The results were *AMAZING*.

    As part of our release build process, we had a program that included a few traces of our common usage scenarios through our core computation bottleneck path. We got anywhere from 0% to 20% performance improvement.

    *This* is the theoretical JIT advantage. Profile guided optimization. It is available statically today. Having JITs match this will be very difficult, but in a way it seems like the HDD vs. SSD debate. Disks are always getting better, but it seems like SSDs will take over at the end of the day. A matter of 'when', not 'if'.

  • by mdmkolbe ( 944892 ) on Saturday August 23, 2008 @03:27PM (#24720611)

    The overhead of dynamic types is quite immense

    That is a common myth, but unless you call a factor of 2.0 immense, it is wrong. The truth is that the cost is immense only in the most naive implementations (*cough* Ruby *cough*). For example, while the compiler has to insert type checks whenever you do something like "+", with a well-designed implementation that uses pointer tagging that should cost at most one bitwise-and and one branch-if-zero (with modern CPU pipelines that should cost a fraction of a cycle, since the branch predicts well and there is no data dependency in the common case). In addition, a well-designed system will analyze the code and eliminate/commonize those checks whenever possible. Finally, for really high performance, some implementations let you run in unsafe mode where those checks are turned off (not applicable to untrusted JavaScript code in a browser, but possibly useful in other applications).

    We could spend all day debating the details of this, but in reality it all comes down to the numbers, so let's talk numbers. Looking at the programming language shootout [debian.org], I see Python Psyco, Scheme Ikarus, and Lisp SBCL as implementations of dynamically typed languages. Their performance relative to Intel C is 6.2, 5.9, and 2.0.

    Bottom line, will dynamically typed languages always be slower than statically typed languages? Yes, but in good implementations it should only be a small (2-6) factor.

    (Note, that Ikarus doesn't even do any heavy code analysis or optimization. It's just a good straight implementation of the language. I note this in case some cry foul that SBCL uses programmer supplied type hints and Psyco uses JIT specialization.)
