Topics: Chrome, Google, Technology

Chrome 10 Beta Boosts JavaScript Speed By 64%

CWmike writes "Google released the first beta of Chrome 10 on Thursday, and Computerworld found it to be 64% faster than its predecessor on Google's V8 JavaScript benchmarks. But in another JS benchmark — WebKit's widely-cited SunSpider — Chrome 10 beta was no faster than Chrome 9. Yesterday's Chrome 10 beta release was the first to feature 'Crankshaft,' a new optimization technology. Google engineers have previously explained why SunSpider scores for a Crankshaft-equipped Chrome show little, if any, improvement over other browsers. 'The idea [in Crankshaft] is to heavily optimize code that is frequently executed and not waste time optimizing code that is not,' said the engineers. 'Because of this, benchmarks that finish in just a few milliseconds, such as SunSpider, will show little improvement with Crankshaft. The more work an application does, the bigger the gains will be.' [Chrome 10 beta download here.]"
This discussion has been archived. No new comments can be posted.

  • by WrongSizeGlass ( 838941 ) on Friday February 18, 2011 @11:00PM (#35251740)
    Will it make the new /. work any faster ... or better ... or anything?
  • by fuzzyfuzzyfungus ( 1223518 ) on Friday February 18, 2011 @11:08PM (#35251790) Journal
    Historically, slashdot (and elsewhere) has seen the battle over performance between the C/C++ classicists and those who insist that Java, or one of its architecturally similar cousins, has, with enough work on the JVM, achieved nearly equivalent execution speed.

    Does anybody know where we are with Javascript now? Traditionally, its performance has been pathetic, since it wasn't all that heavily used; but of late the competition to have a better javascript implementation has been pretty intense. Is there anything fundamentally wrong with the language that will doom it to eternal slowness, or is it on a trajectory to near-native speeds eventually?
    • by BZ ( 40346 ) on Friday February 18, 2011 @11:19PM (#35251836)

      Modern JS jits (tracemonkey, crankshaft) seem to be able to get to within about a factor of 10 of well-optimized (gcc -O3) C code on simple numeric stuff. That's about equivalent to C code compiled with -O0 with gcc, and actually for similar reasons if you look at the generated assembly. There's certainly headroom for them to improve more.

      For more complicated workloads, the difference from C may be more or less, depending. I don't have any actual data for that sort of thing, unfortunately.

      There _are_ things "wrong" with javascript that make it hard to optimize (lack of typing, very dynamic, etc). Things like http://blog.mozilla.com/rob-sayre/2010/11/17/dead-code-elimination-for-beginners/ [mozilla.com] (see the valueOf example) make it a bit of a pain to generate really fast code. But projects like https://wiki.mozilla.org/TypeInference [mozilla.org] are aiming to deal with these issues. We'll see what things look like a year from now!
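
      A minimal sketch of what that valueOf problem looks like (illustrative, not the linked post's exact code): the result of the addition below is never used, but the engine can't simply throw the loop body away, because evaluating obj + i may call a user-defined valueOf with side effects.

          var calls = 0;
          var obj = {
            valueOf: function () {
              calls++;        // observable side effect
              return 42;
            }
          };
          for (var i = 0; i < 1000; i++) {
            obj + i;          // result unused, but valueOf must still run
          }
          // calls is now 1000; "eliminating" this dead code would be wrong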

      • Re: (Score:2, Informative)

        by sydneyfong ( 410107 )

        Are you sure about this?

        I don't recall gcc -O3 and -O0 having a factor of 10 difference for most tasks. And Javascript definitely isn't close to C performance, even unoptimized.

        Besides, gcc -O3 can actually be somewhat slower than -O2, which is why almost nobody uses -O3 except for the Gentoo zealots.

        • by BZ ( 40346 ) on Saturday February 19, 2011 @02:04AM (#35252362)

          I'm very sure, yes.

          > I don't recall gcc -O3 and -O0 having a factor of 10
          > difference for most tasks.

          They don't. My comment was quite specific: the cited numbers are simple number-crunching code. The fact that -O0 stores to the stack after every numerical operation while -O3 keeps it all in registers is the source of the large performance difference as long as you don't run out of registers and such. The stack stores are also the gating factor in the code generated by Tracemonkey, at least.

          > And Javascript definitely isn't close to C
          > performance, even unoptimized.

           For simple number-crunching code, Tracemonkey as shipped in Firefox 4 runs at the same speed as C compiled with GCC -O0, in my measurements. I'd be happy to point you to some testcases if you really want. Or do you have measurements of your own, as the basis for your claim, that you'd care to share?
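
           To make "simple number-crunching code" concrete, a hypothetical testcase of the kind being discussed (not BZ's actual one) is just a tight arithmetic loop:

               function sumOfSquares(n) {
                 var total = 0;
                 for (var i = 0; i < n; i++) {
                   total += i * i;   // pure arithmetic: the best case for a JIT
                 }
                 return total;
               }

               var start = Date.now();
               sumOfSquares(100000000);
               console.log("elapsed ms:", Date.now() - start);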

          Note that we're talking very simple code here. Once you start getting more complicated the gap gets somewhat bigger.

          As an example of the latter, see https://github.com/chadaustin/Web-Benchmarks [github.com] which has multiple implementations of the same thing, in C++ (with and without SIMD intrinsics) and JS with and without typed arrays. This is not a tiny test, but not particularly large either.

          On my hardware the no-SIMD C++ compiled with -O0 gives me about 19 million vertices per second. The typed-array JS version is about 9 million vertices per second in a current Firefox 4 nightly.

          For comparison, the same no-SIMD C++ at -O2 is at about 68 million vertices per second. -O3 gives about the same result as -O2 here; -O1 is closer to 66 million.

          > Besides, gcc -O3 can actually be somewhat
          > slower than -O2

           Yes, it can, depending on cache effects, etc. For the sort of code we're talking about here it's not (and in fact -O2 and -O3 typically generate identical assembly for such testcases). See the numbers above.

          One other note about JS performance today: it's heavily dependent on the browser and the exact code and what the jit does or doesn't optimize. In particular, for the testcase above V8 is about 30% faster than Spidermonkey on the regular array version but 5 times slower on the typed array version (possibly because they don't have fast paths in Crankshaft yet for typed arrays, whereas Spidermonkey has made optimizing those a priority).
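
           As an illustration of that regular-array versus typed-array split (hypothetical kernel, not the Web-Benchmarks code):

               function scalePlain(n) {
                 var a = [];                    // plain array: boxed values, any type
                 for (var i = 0; i < n; i++) a[i] = i * 0.5;
                 return a;
               }

               function scaleTyped(n) {
                 var a = new Float32Array(n);   // typed array: dense raw 32-bit floats
                 for (var i = 0; i < n; i++) a[i] = i * 0.5;
                 return a;
               }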

          Again, I suspect that things will look somewhat different here in a year. We'll see whether I'm right.

        • http://shootout.alioth.debian.org/u32/which-programming-languages-are-fastest.php [debian.org]

          According to this benchmark V8 is 4x slower than C/C++. From my perspective that's "somewhere close" to C's performance. Compared to the 10x+ results for most scripting languages, including other Javascript implementations, it's pretty darn respectable. And if they've just improved it by another 60% for many applications, that's a big deal.

      • by Alef ( 605149 )

        There _are_ things "wrong" with javascript that make it hard to optimize (lack of typing, very dynamic, etc).

        To get a notion of where it is possible to get with a similarly dynamic language, take a look at how the LuaJIT benchmarks [debian.org] compare with optimized C++ and other dynamic languages. Notice it is not far behind a state-of-the-art Java VM.

        Another pretty interesting aspect is this code size versus speed [debian.org] comparison.

      • Your comparison to gcc optimization levels is apples and oranges; it's a bit misleading, and I'd be interested in hearing where you got your information. As a relative comparison I do understand it, and by rough guessing I'd say you sound close to accurate in terms of numbers for single-threaded code. Thank you for pointing out how the lack of typing and the need to maintain a dynamic nature (dynamic "this" context!? F* you ECMA, that's terrible!) enhances the crap factor and makes it much harder to deal with making optimizations
        • by BZ ( 40346 )

          > Your comparison to gcc optimization levels is apples
          > and oranges,

          Well... any inter-level comparison is, to some extent.

          > it's a bit misleading

          How so?

          > I'd be interested in hearing where you got your
          > information.

          Which information? The performance numbers? I wrote some code, and then timed how long it takes to run: the usual way. ;)

          > Getting wider spread support of different types of
          > scripts more universally accepted and adding
          > proper DOM handling libraries and runtime
          > iso

          • by BZ ( 40346 )

            > Well... any inter-level comparison is, to some extent.

            I meant inter-language.

      • by pz ( 113803 )

        well-optimized (gcc -O3) C code

        Thinking that gcc produces well-optimized code is a nice sunny view to have, but does not align with reality in my experience. I, too, used to think that gcc was the best compiler out there, mostly because I had not done any head-to-head comparisons, and was echoing what everyone else said.

        Then, I had to write some high-performance C code. I tried everything I could get my hands on. I used every source code transformation and technique I knew. For this application, the more performance I could wring out

        • results vary widely. Try using gcc on SPARC sometime and prepare to be really unimpressed, at least as compared to acc.

        • by BZ ( 40346 )

          Oh, I don't think gcc is the best compiler by any means, especially since I have good data to the contrary. As you say, MSVC has much better code generation.

          But on simple numeric benchmarks, GCC does produce pretty good code, as does any other sane compiler. The point being that these benchmarks are _simple_ and hence easy to optimize.

    • by NoSig ( 1919688 ) on Friday February 18, 2011 @11:19PM (#35251840)
      Java isn't a dynamic language, which is the central difference that makes languages like Javascript and Python much slower than C++, and even Java, with compiler technology as it is now and for the foreseeable future. A big, still-relevant problem with Java is the long loading times you end up with when starting large applications. Javascript isn't even compiled to bytecode, so that problem would be much worse if big applications were written and run as Javascript. Javascript is getting faster all the time, but don't expect anything like C++ or even Java for general-purpose programming. Which is fine, because that isn't what Javascript is all about.
      • by fuzzyfuzzyfungus ( 1223518 ) on Friday February 18, 2011 @11:25PM (#35251868) Journal
        Given that browsers tend to cache website elements for better speed when loading objects that haven't changed since the last load, and given that, while people want their page now, their computer usually has a fair amount of idle time available, would you expect to see browsers implement some sort of background optimization mechanism that chews over cached javascript during idle periods, to reduce the amount of computationally expensive work needed should the page be reloaded? Or is Javascript not amenable to that level of preemptive processing?
        • Did you just suggest locally "precompiling" javascript with idle client CPU cycles? (Not sarcasm; I think it's a great idea if that is the case.) Can you even "precompile" Javascript currently, and if not, why not? /disclaimer: I have a very light understanding of Javascript, more of an infrastructure/networking guy. Go easy.

          • I was asking if that were practical, since I don't know much about the guts of this stuff. TFA's mention of optimizing code that runs frequently, and not optimizing rarely used code, gave me the impression that there is some sort of technique, presumably a species of JIT compilation, that is quite computationally expensive; but makes the code subjected to it run faster thereafter. NoSig's comment about load times suggested a similar tradeoff.

            This struck me(on naive first inspection) as being something th
        • Re: (Score:2, Informative)

          by Anonymous Coward

          One problem is that websites usually contain Javascript from ad sites which can't be cached, because they want to track every hit. Additionally, since scripts are allowed to do stuff like mess with the prototypes of built-in objects, which affects any code loaded after them, you probably have to throw away any precompiled code if any of the files change.
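
          As a minimal sketch of that prototype problem (hypothetical code): an early script's change to a built-in prototype is visible to every script loaded after it, so compiled code cached for those later scripts may no longer be valid.

              // early script, e.g. one served by an ad network
              Array.prototype.last = function () {
                return this[this.length - 1];
              };

              // any later script now sees the modified built-in
              [1, 2, 3].last();   // 3 -- behavior depends on load order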

          Once the page is loaded, most Javascript engines will try to optimize code that gets run frequently, which is good when you're running a rich Javascript

        • by NoSig ( 1919688 )
          Certainly some kind of caching of JIT output should be helpful in some way. There are numerous issues that limit how helpful it can be. For one, it hasn't solved the Java issue, and Java is much more amenable to this kind of thing than Javascript is. For another, Javascript often makes heavy use of document.write, which means Javascript will dynamically write more Javascript to be run later. So the code being run can change from one page load to another even if the code is initially the same, defeating caching
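
          A sketch of that document.write pattern (hypothetical script names): the code the engine ends up compiling can differ from one page load to the next.

              var variant = (Math.random() < 0.5) ? "a" : "b";
              document.write('<script src="/js/' + variant + '.js"><\/script>');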
    • V8 compiles javascript to native code, so in that sense it is native speed. The limiting factor is interacting with the HTML DOM/CSS.
      • by BZ ( 40346 )

        gcc compiles C to native code, but there's a noticeable speed difference between compiling with -O0 and -O2 for most C code.

        There are plenty of things people are doing in JS nowadays where the DOM is only a limiting factor because JS implementations are 10x faster than they were 4 years ago...

    • by TopSpin ( 753 ) on Friday February 18, 2011 @11:55PM (#35252010) Journal

      Does anybody know where we are with Javascript now?

      There is always the Computer Language Benchmarks Game [debian.org]. There you will find V8 competitive with Python, Ruby, and other such languages, which is to say slower than the usual compiled suspects by about the same degree.

    • by thoughtsatthemoment ( 1687848 ) on Saturday February 19, 2011 @01:17AM (#35252258) Journal

      There are indeed a few fundamental issues with Javascript that make it both useful for coding and at the same time hopeless to replace something like C.

      In javascript, accessing the property of an object requires a lookup, and some checking to make sure things exist. Compared to accessing a field in a C struct, that's a lot of overhead (AFAIK, Google does do heavy optimization in this area). The reason for doing that is safety and dynamism.
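
      A sketch of the kind of optimization engines apply here (hidden classes plus inline caches, in V8's case): property access stays cheap only while objects keep a consistent shape.

          function Point(x, y) {
            this.x = x;   // assigning fields in a fixed order gives every Point
            this.y = y;   // the same hidden class, so p.x becomes a fixed offset
          }

          function lengthSquared(p) {
            return p.x * p.x + p.y * p.y;   // monomorphic access: near-struct speed
          }

          var p = new Point(3, 4);
          p.z = 5;   // adding a property later changes the shape; slow paths follow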

      In a large application, performance ultimately comes down to memory management. The best approach is memory and resource pooling, fine-tuned by the programmer. I doubt javascript can be efficiently used this way. I don't think javascript can be used to code Word or a browser (I mean the browser itself) any time soon.

      Multithreading is also an issue. There is not really anything wrong with the language. It's more of an implementation issue.

      • The most fundamental problem with Javascript is the fact that modern browsers encounter JS code, compile it into machine code, and run that.

        JS was initially interpreted and therefore the code could be limited in function. Machine code is not limited in function...

        What if C had been chosen instead of JS? I wouldn't want my browser compiling to machine code and running any C code that any website (or 3rd party plugin for that website) delivered to my browser... Why do we allow JS Engines to do this?

    • I don't know the latest benchmarks, but most programs aren't app-code bound anyway - they depend more on graphics, I/O (disk, network), external servers (web), etc.

      Where pure CPU speed is an issue a language like C/C++ that was designed to map 1:1 to hardware always has a potential edge at least in allowing a human to cleverly code something, but optimizers are getting better all the time and a JIT compiler has the potential benefit of run-time information which a pre-compiled binary doesn't (not that a pre

    • Both Java and C/C++ are strongly typed languages, which give a lot of information to the compiler and (in the case of Java) runtime. The question here is how much optimization people can do on a loosely typed language like JavaScript... Apparently they can do quite a bit because JS today is screaming faster than a few years ago.

      You would expect that, all things being equal, the languages with a runtime (including JavaScript) should beat out those without, because they can do things that you can't do statically

    • http://shootout.alioth.debian.org/u32/which-programming-languages-are-fastest.php [debian.org]

      According to this benchmark, Javascript V8 (v9 presumably) is 4x slower than C. Not bad at all, IMO.

  • by Jackie_Chan_Fan ( 730745 ) on Friday February 18, 2011 @11:09PM (#35251794)

    Take that! This Chrome goes to EeeeeLeven....

  • by schwit1 ( 797399 ) on Friday February 18, 2011 @11:30PM (#35251888)

    v11.0.672.2 is also a beta.
    http://www.filehippo.com/download_google_chrome/ [filehippo.com]

  • by smash ( 1351 )

    ... just goes to show the abysmally sub-optimal implementations of javascript we've been living with for the past decade or so.

    Chrome was already way faster than anything else (particularly upon its first release; it was like 10x faster than IE, I thought?).

    Surely some time soon we're going to stop seeing double-digit percentage improvements in performance, or were the original javascript implementations really THAT BAD?

    • 10x faster than IE8, much less IE7, is not an accomplishment worth mentioning. On the other hand, 50% faster than IE9 would be very impressive indeed - the RC is effectively at the top of the speed charts (on Sunspider at least) right now.

      • by n0-0p ( 325773 )

        If you're using SunSpider as your sole benchmark then you're already behind. SunSpider has outlived its usefulness (which the article touches on). In order to get a win of a few hundredths of a percent on SunSpider you have to add in premature optimizations that hurt page-load times and the performance of long running JavaScript applications. (Or you could add some dubious optimizations [zdnet.com] that are targeted specifically to the test, but that sounds a bit like cheating on a benchmark to me.)

        SunSpider was good f

  • Does all this JS optimization happen on loading a page?

    I've noticed pages freezing up longer now, and this is my only guess as to why.

    If this is the case, do these benchmarks take account of this?

    Fast execution is great, but not at the price of waiting for it.

    • Most of these newfangled JavaScript engines are simply JIT compilers, so yes, compilation takes some time, which usually means the page loads slower. All those "OnLoad" handlers have to be parsed and compiled before they can run faster than they could before.
      Ideally you don't notice it 'cause your awesome new CPU is fast enough to make it a non-issue. If you didn't upgrade your PC (or mobile) last week but last decade, you might have a problem tho.

      • Does this help?
        http://headjs.com/ [headjs.com]

        Like they say, loading time is crucial to the sense of speed, and with these new browsers I was really expecting the heavy JS to speed up... Instead the heavy JS sites would freeze for a while. Very annoying. Most sites are OK though. Ebay is by far the worst.
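
        For reference, loading with that library looks roughly like this (assuming the head.js() call shown in the project's examples): scripts download in parallel without blocking rendering, and the callback runs once all of them have executed.

            head.js("/js/jquery.js", "/js/app.js", function () {
              // the page has already rendered; heavy initialization goes here
            });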

  • IMHE(xperience), Chrome loads pages slower than Firefox with NoScript. Here's why. FF can load partial pages better. By this, I mean FF can load pages with missing or incomplete elements better. While FF will, for example, happily show me a page that is badly formatted because the style sheet hasn't been fully loaded, all that Chrome will show me is a big blank page until it can place the elements correctly on the page. To be sure, FF will dynamically reformat the badly formatted page as the page requisites

    • by afidel ( 530433 )
      Not sure about that as noscript is a bit draconian, but Chrome 9 and 10 with adblock are both faster than FF 4b10 with adblock.
    • Mod parent up. Firefox is vastly preferable if you are trying to access the network behind a slow connection, like a GPRS cellphone for example. With Chrome you have to wait until everything is loaded before you are able to see the page, whereas Firefox does a decent job trying to render what it has loaded up to the present point.

  • How fast will Chrome 10 block an ISO H.264 video as it tries to get from HTML to my video playback hardware? I heard they are working on getting that up to instantaneously.

  • Dear /. taggers: Google doesn't care about version numbers at all. It is just an arbitrary number signifying their major releases (which happen every six weeks or so). Want to know how little Google cares about version numbers? Go to google.com/chrome . Try to find a version number. Go to Google's Chrome blog ( http://chrome.blogspot.com/ [blogspot.com] ). Try to find a version number. Google's NEVER promoted a new release of Chrome with the version number. The only people to do so are other sites who are seemingly compelled
    • Dear /. taggers: Google doesn't care about version number at all. It is just an arbitrary number signifying their major releases

      Yes, that is why we call it version inflation. They are meaningless bumps in version number that correspond to nothing. HTH, HAND.

  • 'The idea [in Crankshaft] is to heavily optimize code that is frequently executed and not waste time optimizing code that is not,' said the engineers.

    So even if some of the code is only used on rare occasions, how is it smart to only optimize some of the code instead of all of it? And if the argument is "it takes longer to optimize the code than it does to run it" then one would have to wonder if it takes longer to decide what to optimize than it does to run it.

    • by n0-0p ( 325773 )

      Your guess is correct; for rarely followed code paths it does take significantly longer to (aggressively) optimize the code than it does to run it. Also, premature optimization can generate pathologically suboptimal code, meaning the performance can be much worse than the unoptimized case.

      My understanding of how Crankshaft works is that the emitted code keeps some basic information about the data and frequency for any given code path (it's probably function level, but I don't know the code so I can't say for sure).
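
      As a sketch of the hot/cold split being described (illustrative, not Crankshaft's actual heuristics): the loop below quickly proves hot and is worth aggressive optimization, while the error path may never run at all.

          function sum(data) {
            var total = 0;
            for (var i = 0; i < data.length; i++) {
              total += data[i];             // hot: recompiled aggressively
            }
            if (total !== total) {          // NaN check: cold, rarely taken
              throw new Error("bad input"); // not worth optimizing
            }
            return total;
          }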

  • Bugger! Now i only have time to make a coffee while a slashdot page loads - i used to be able to make dinner!
