
Chrome 10 Beta Boosts JavaScript Speed By 64%

CWmike writes "Google released the first beta of Chrome 10 on Thursday, and Computerworld found it to be 64% faster than its predecessor on Google's V8 JavaScript benchmarks. But in another JS benchmark — WebKit's widely-cited SunSpider — Chrome 10 beta was no faster than Chrome 9. Yesterday's Chrome 10 beta release was the first to feature 'Crankshaft,' a new optimization technology. Google engineers have previously explained why SunSpider scores for a Crankshaft-equipped Chrome show little, if any, improvement over other browsers. 'The idea [in Crankshaft] is to heavily optimize code that is frequently executed and not waste time optimizing code that is not,' said the engineers. 'Because of this, benchmarks that finish in just a few milliseconds, such as SunSpider, will show little improvement with Crankshaft. The more work an application does, the bigger the gains will be.' [Chrome 10 beta download here.]"
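As a rough illustration of the engineers' point, here is a hypothetical micro-benchmark (the kernel, sizes, and timings are invented for illustration, not taken from SunSpider or V8's suite): a run that finishes in a few milliseconds is over before an optimizer like Crankshaft has done much, while a longer run spends most of its time in optimized code.

```javascript
// Hypothetical sketch: the same numeric kernel timed at two sizes.
// A run that is over in a few milliseconds, SunSpider-style, ends
// before an optimizing compiler kicks in; a longer V8-style run spends
// most of its time in optimized code, which is where Crankshaft-type
// gains show up.

function kernel(n) {
  // Simple, hot numeric loop -- the kind of code a JIT optimizes well.
  let sum = 0;
  for (let i = 1; i <= n; i++) {
    sum += Math.sqrt(i);
  }
  return sum;
}

function time(fn, arg) {
  const start = Date.now();
  fn(arg);
  return Date.now() - start;
}

const shortMs = time(kernel, 1000);     // finishes almost instantly
const longMs = time(kernel, 20000000);  // long enough to get optimized

console.log(`short run: ${shortMs} ms, long run: ${longMs} ms`);
```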
  • by tapo ( 855172 ) on Saturday February 19, 2011 @12:12AM (#35251806) Homepage

    Google is doing this with Native Client [wikipedia.org]. It allows a browser to run code compiled for x86, ARM, or LLVM bytecode in a sandbox. It's currently in beta in Chrome 10 (you can enable and try it out by going to about:flags), and apparently available for other browsers as well under the BSD license.

  • by BZ ( 40346 ) on Saturday February 19, 2011 @12:19AM (#35251836)

    Modern JS jits (tracemonkey, crankshaft) seem to be able to get to within about a factor of 10 of well-optimized (gcc -O3) C code on simple numeric stuff. That's about equivalent to C code compiled with -O0 with gcc, and actually for similar reasons if you look at the generated assembly. There's certainly headroom for them to improve more.

    For more complicated workloads, the difference from C may be more or less, depending. I don't have any actual data for that sort of thing, unfortunately.

    There _are_ things "wrong" with javascript that make it hard to optimize (lack of typing, very dynamic, etc). Things like http://blog.mozilla.com/rob-sayre/2010/11/17/dead-code-elimination-for-beginners/ [mozilla.com] (see the valueOf example) make it a bit of a pain to generate really fast code. But projects like https://wiki.mozilla.org/TypeInference [mozilla.org] are aiming to deal with these issues. We'll see what things look like a year from now!
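    A minimal sketch of that valueOf hazard (an assumed example in the spirit of the linked post, not its actual code):

```javascript
// The result of `obj + 1` is never used, so it looks like dead code --
// but `+` may invoke a user-defined valueOf with observable side
// effects, so the engine cannot simply eliminate the addition.

let calls = 0;
const x = {
  valueOf: function () {
    calls++;      // observable side effect
    return 42;
  }
};

function looksDead(obj) {
  const unused = obj + 1;  // apparently dead, but must still run valueOf
}

looksDead(x);
console.log(calls);  // 1 -- removing the "dead" addition would change this
```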

  • by spinkham ( 56603 ) on Saturday February 19, 2011 @12:37AM (#35251918)

    11.0.672.2 is a Dev channel release. Call it "alpha" if you like. http://googlechromereleases.blogspot.com/2011/02/dev-channel-update_17.html [blogspot.com]

    There are 3 release channels: Stable, Beta, and Dev.

  • by TopSpin ( 753 ) on Saturday February 19, 2011 @12:55AM (#35252010) Journal

    Does anybody know where we are with Javascript now?

    There is always the Computer Language Benchmarks Game [debian.org]. There you will find V8 competitive with Python, Ruby and other such languages, which is to say slower than the usual compiled suspects by about the same degree.

  • Re:WTF Beta! (Score:4, Informative)

    by 517714 ( 762276 ) on Saturday February 19, 2011 @01:26AM (#35252118)
    We tend to shoot for clever or snarky instead of meaningful. ;)
  • by sydneyfong ( 410107 ) on Saturday February 19, 2011 @01:39AM (#35252172) Homepage Journal

    Are you sure about this?

    I don't recall gcc -O3 and -O0 having a factor of 10 difference for most tasks. And Javascript definitely isn't close to C performance, even unoptimized.

    Besides, gcc -O3 can actually be somewhat slower than -O2, which is why almost nobody uses -O3 except for the Gentoo zealots.

  • by BZ ( 40346 ) on Saturday February 19, 2011 @03:04AM (#35252362)

    I'm very sure, yes.

    > I don't recall gcc -O3 and -O0 having a factor of 10
    > difference for most tasks.

    They don't. My comment was quite specific: the cited numbers are for simple number-crunching code. The fact that -O0 stores to the stack after every numerical operation while -O3 keeps it all in registers is the source of the large performance difference, as long as you don't run out of registers and such. The stack stores are also the gating factor in the code generated by Tracemonkey, at least.

    > And Javascript definitely isn't close to C
    > performance, even unoptimized.

    For simple number-crunching code, Tracemonkey as shipping in Firefox 4 runs at the same speed as C compiled with GCC -O0, in my measurements. I'd be happy to point you to some testcases if you really want. Or do you have your own measurements that you've made that are the basis for your claim and that you'd care to share?

    Note that we're talking very simple code here. Once you start getting more complicated the gap gets somewhat bigger.

    As an example of the latter, see https://github.com/chadaustin/Web-Benchmarks [github.com] which has multiple implementations of the same thing, in C++ (with and without SIMD intrinsics) and JS with and without typed arrays. This is not a tiny test, but not particularly large either.

    On my hardware the no-SIMD C++ compiled with -O0 gives me about 19 million vertices per second. The typed-array JS version is about 9 million vertices per second in a current Firefox 4 nightly.

    For comparison, the same no-SIMD C++ at -O2 is at about 68 million vertices per second. -O3 gives about the same result as -O2 here; -O1 is closer to 66 million.

    > Besides, gcc -O3 can actually be somewhat
    > slower than -O2

    Yes, it can, depending on cache effects, etc. For the sort of code we're talking about here it's not (and in fact -O2 and -O3 typically generate identical assembly for such testcases; see the numbers above).

    One other note about JS performance today: it's heavily dependent on the browser and the exact code and what the jit does or doesn't optimize. In particular, for the testcase above V8 is about 30% faster than Spidermonkey on the regular array version but 5 times slower on the typed array version (possibly because they don't have fast paths in Crankshaft yet for typed arrays, whereas Spidermonkey has made optimizing those a priority).
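    To make the typed-array split concrete, here is a hypothetical kernel (not the Web-Benchmarks code) written both ways:

```javascript
// Same arithmetic, two backing stores. A regular Array can hold
// anything, so the JIT must handle the general case; a Float64Array has
// a fixed element type and a flat memory layout the JIT can specialize
// for -- if the engine has implemented that fast path.

function scaleRegular(n, factor) {
  const a = new Array(n);
  for (let i = 0; i < n; i++) a[i] = i * factor;
  return a;
}

function scaleTyped(n, factor) {
  const a = new Float64Array(n);
  for (let i = 0; i < n; i++) a[i] = i * factor;
  return a;
}

// Both compute identical values; which one is faster depends on which
// fast paths a given engine's JIT provides, as the V8/Spidermonkey
// comparison above shows.
console.log(scaleRegular(4, 2.5), scaleTyped(4, 2.5));
```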

    Again, I suspect that things will look somewhat different here in a year. We'll see whether I'm right.

  • by Anonymous Coward on Saturday February 19, 2011 @03:09AM (#35252372)

    One problem is that usually websites contain Javascript from ad sites which can't be cached because they want to track every hit. Additionally, since scripts are allowed to do stuff like mess with the prototypes for built-in objects which will affect any code loaded after it, if any of the files are changed you probably have to throw away any precompiled code.
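    A small sketch of that prototype problem (an assumed example): code compiled or cached before a later script runs can have its behavior changed out from under it.

```javascript
// Any script loaded later may mutate built-in prototypes, which affects
// all code that runs after it -- so a cached compilation that baked in
// assumptions about Array.prototype.slice would now be wrong.

const before = [1, 2, 3].slice(0, 2);  // normal slice

// A later-loaded third-party script could do this:
Array.prototype.slice = function () {
  return ['surprise'];
};

const after = [1, 2, 3].slice(0, 2);  // same call site, new behavior

console.log(before, after);
```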

    Once the page is loaded, most Javascript engines will try to optimize code that gets run frequently, which is good when you're running a rich Javascript application. It won't necessarily help initial page load times, though.
