Chrome for Android is Finally Going 64-bit, Giving it a Speed Boost in Benchmarks (androidpolice.com)

An anonymous reader shares a report: The first Android version to support 64-bit architecture was Android 5.0 Lollipop, introduced back in November 2014. Since then, more and more 64-bit processors have shipped, and today virtually all Android devices are capable of running 64-bit software (excluding a handful of oddballs). However, Google Chrome has never made the jump and is only available in a 32-bit flavor, potentially leading to some unnecessary security and performance degradations. That's finally changing: Starting with Chrome 85, phones running Android 10 and higher will automatically receive a 64-bit version. A look at chrome://version confirms as much: The current stable and beta builds, version 83 and 84, note that they're still 32-bit applications. Chrome Dev and Chrome Canary (release 85 and 86) are proper 64-bit apps. Google confirms as much on its Chromium Bugs tracker.

When compared in a number of Octane 2.0 benchmarks, the 64-bit version got consistently better results than the 32-bit version. It's possible that there have been other optimizations that make Chrome 85 faster than 83 -- the architecture is not necessarily all there is to it. Still, the benchmark results suggest that there are some enhancements, even if these tests aren't easy to translate to real-world usage.

  • Hence the observed speed boosts are very likely due to other optimizations.

    • by tlhIngan ( 30335 ) <slashdot.worf@net> on Saturday July 04, 2020 @06:47AM (#60260530)

      Hence the observed speed boosts are very likely due to other optimizations.

      More efficient architecture, actually. AArch64 is a much leaner and cleaner architecture, much better suited to modern processor designs that take advantage of things like superscalar execution.

      The speed difference is rather striking - the same 32-bit (AArch32) code running on ARMv7-based architectures gains about a 10% speedup going to ARMv8. However, if you can recompile it for AArch64, you'd see a 50% speedup.

      And if you can control the microarchitecture, you can achieve even greater speedups. It's why the iPhone 5S (the first iPhone with a 64-bit processor) was easily running things 2 times faster than the iPhone 5 - the 64-bit code speedup plus AArch64 optimizations ensure the same code can run a lot faster.

      Granted, you lose some niceties that made ARM nice - AArch64 no longer supports conditional execution on every instruction. Predicating everything resulted in a lot of pipeline stalls and wasted speculative execution because of instruction dependencies, which wasted power and lowered speed since you're burning more resources speculatively executing instructions that get discarded.

      • Uum, you could have separate pathways for both if you wanted. A fast one, and a conditional one. With no delays unless you switch over.

      • by makomk ( 752139 )

        The reason that iPhones gained so much from switching to 64-bit is actually due to specific quirks of Apple's Objective-C implementation - the extra unused pointer bits are used to store information that isn't actually a pointer, such as reference counts and type IDs.

        • How can you store reference count in pointers if there's multiple pointers to the same object? You'd have to update all pointers to the same object every time the reference count changes.
          • Duh. What do you think Apple uses all those unicorns for? To provide the blood for the quantum-entangled ref counts in their pointers!
          • The pointer with the reference count isn't the pointer to the object. It's the pointer in the "isa" field of the object which refers to the object's class. In the 32-bit ABI, objects don't carry reference counts in the objects themselves, it is stored in a separate hash table (which has to be locked prior to access). It's a legacy holdover from a time when memory was very scarce and putting a reference count into every object would be way too expensive. https://www.mikeash.com/pyblog... [mikeash.com]
    • Really? Why should 64 bit be slower? Do you even understand how processors work?

      • by enriquevagu ( 1026480 ) on Saturday July 04, 2020 @07:56AM (#60260628)

        Apparently he understands it better than you. Larger addresses mean larger pointers and increased cache utilisation, particularly for pointer-intensive applications (e.g. Java, graphs, etc.). The resulting reduction in effective cache space lowers the cache hit ratio and increases average memory access time, lowering performance.

        Employing a 64-bit ISA with 32-bit pointers is typically more memory-efficient (and faster) than using full 64-bit addresses, as long as a single application does not use more than 4GB of memory (which covers most applications).

        A different aspect is cores that employ a different microarchitecture for the 32-bit and 64-bit ISAs; performance differences in that case depend not on the ISA but on the microarchitecture.

        • Pointers can be whatever size you make them. Even Java offers you the UseCompressedOops flag.
          • by gweihir ( 88907 )

            Pointers can be whatever size you make them. Even Java offers you the UseCompressedOops flag.

            Making the things that are actually used 64 bit by default is part of compiling for 64 bit. Otherwise it is called "compiling for 32 bit".

            • Implementations of high-level languages can choose to do whatever they want to. It's the 21st century; we do have the techniques for making smart compilers. "Compiling for 64 bit" doesn't mean you have to make every piece of data in memory a 64-bit one; that would be awful.
        • as long as a single application does not employ more than 4GB of memory

          Technically, even that is not true in languages with static disjoint types and/or data alignment. Disjoint types would allow you to use multiple 4GB regions, whereas accommodation of data alignment would allow you to multiply the sizes of these regions by small powers of two.

        • by Junta ( 36770 )

          While what you say is possible (e.g. the x32 ABI on x86_64 using 32-bit pointers), I don't know that the work has gone into making such a thing an option under aarch64. In fact, interest in the x32 ABI is so low that there has been talk of deprecating it and not bothering.

          All the other changes apart from extending the pointer size are going to bring the mentioned performance benefit. While there may be some room for an even better, more efficient build with significant work, already you have phones wi

        • Your cache point is a niche case far outweighed in graphics by having 64 bit ints and not having to rely on large number libraries. Also if the data bus is also 64 bit then you can load twice as much data into a register in one access which is useful for structures and unions used for packet processing in network systems. Perhaps you never wondered why CPUs such as Alpha were 64 bit when a 4GB system memory was still science fiction.

          • by Bengie ( 1121981 )
            1) You're not loading 2x as much data into a register, you're loading more range into a register. If all you need is an integer and 32 bits will suffice, 64 bits adds overhead, be it in energy consumption or a performance penalty, with no benefit.
            2) If you're actually treating 64 bits of data as "more data", and all you care about is moving/loading it, 128/256/512-bit SIMD works better for that. Heck, use 64-bit floats like they did for years on 32-bit CPUs.
            3) The 32-bit vs 64-bit data bus is a thing of the past. It
            • by Viol8 ( 599362 )

              Whatever. Using your logic we should have stayed with 8-bit CPUs. Or even 4.

              Sorry, you've not convinced me in the slightest.

        • Yeah but at 64 bits Chrome can now chew through all available memory even on > 4GB phones.

          Although I guess that's more of an issue for Firefox than Chrome.

      • Back in the dark ages, the Sun Ultra boxes ran slower if you installed 64 bit Solaris. That was only if you needed to address multiple GB of memory.

      • by gweihir ( 88907 )

        Really? Why should 64 bit be slower? Do you even understand how processors work?

        Excuse me? That is CPU design 101.

      • by Bengie ( 1121981 )
        64bit modes tend to use different alignments among other things. This can result in a lot more padding and thus wasted memory space resulting in less efficient use of cache.
    • That makes no sense; what software is slowed down by having more and bigger registers and a significantly updated instruction set?
      • by gweihir ( 88907 )

        That makes no sense; what software is slowed down by having more and bigger registers and a significantly updated instruction set?

        Software that uses main memory and caches. The changes in instruction set are not a 64-bit feature; that is an instruction set update that could have been done for 32 bit just as well.

        • Software that uses main memory and caches.

          That software has the option of using exactly the same layout of data in memory so there's no reason why it should get slower for these reasons.

          The changes in instruction set are not a 64 bit feature

          On ARM they definitely are. On ARM you can't choose to use 64b registers with the old instruction set, or the new instruction set without 64b registers. If you think you can, please point me to the relevant documentation.

          • by gweihir ( 88907 )

            You seem to be unaware that you can compile 32 bit software for a 64 bit CPU. That does not make it 64 bit software. It just makes it 32 bit software that runs on a 64 bit CPU. Depending on the CPU, such software may be slower, faster, or the same speed. Actual 64 bit software is slower unless you need 64 bit operands, for example in numeric calculations. I recommend you read up on the topic.

            • ...you can compile 32 bit software for a 64 bit CPU. That does not make it 64 bit software. It just makes it 32 bit software that runs on a 64 bit CPU.

              That sounds so ambiguous that I have no idea what you mean by that. Are you saying that AArch64 ILP32 compilation doesn't yield 64b code? Because it definitely doesn't yield AArch32 code just running on an AArch64 CPU in compatibility mode.

              Actual 64 bit software is slower

              Oh, nonsense. That may have been the case in the distant past, such as for example with AMD's first Hammer CPUs, but both microarchitectures and compilers have massively fixed their shit since then to remove these regressions.

            • by Anonymous Coward

              You think you've learned something about 32-bit versus 64-bit SPARC or AMD and you think your nugget of knowledge can be deployed here.

              The ARM64 instruction set is COMPLETELY DIFFERENT from the ARM32 instruction set and there are a bunch of changes which increase performance even if you don't need to use 64-bit numbers, including twice as many registers and removing conditional execution from individual instructions. Conditional execution might have been a good idea at one time -- but now it plays havoc with modern out-of-order pipelines.

    • Comment removed based on user account deletion
      • by gweihir ( 88907 )

        And then you find I said something about "other optimizations" and hence I am actually right.

    • by Luthair ( 847766 )
        JavaScript's numbers are all double-precision floating point, so one would assume any math performed in JavaScript would probably be faster with a 64-bit binary.
      • by gweihir ( 88907 )

        Floating point has been handled by the FPU in any larger processor for a long, long time. "32 bit" and "64 bit" refer to the ALU, not the FPU.

  • by BAReFO0t ( 6240524 ) on Saturday July 04, 2020 @07:03AM (#60260550)

    Per tab, of course.

    Gotta support ALL the kitchen sinks. Err, sorry, I mean webKitchenSinks! Of course completely incompatible with regular kitchen sinks, even though implemented on top of them.
    Whatever it takes to kill everyone else.

    Now imagine if Chrome was a lifeform.
    Can you imagine what we would call it?
    The Blob Queen -- a horror movie by John Carpenter [imdb.com].

  • How does the move to 64-bit enhance security? What security issues were caused by Chrome remaining 32-bit all this time?

    • One example: ASLR works much better in a 64-bit address space:

      "ASLR is a security feature that causes a program's data locations to be randomly arranged in memory. ... A 64-bit system has a much larger address space than a 32-bit system, making ASLR that much more effective"

      https://www.howtogeek.com/1655... [howtogeek.com]

      • by Luthair ( 847766 )
        In practical terms that isn't important; stumbling into the 'right' memory address in 32-bit is still extraordinarily unlikely. Personally I would guess that the article's author is confused because on Windows Microsoft enabled a number of security features (like ASLR) by default on the 64-bit versions of their OS, leaving them disabled or optional on the 32-bit versions.
  • Lately I'm getting tired of Chrome and its bugginess. The speedup probably just means it will crash faster/sooner for me.
