First Apple Silicon Benchmarks Destroy Surface Pro X (thurrott.com) 218

As expected, developers with early access to Apple silicon-based transition kits have leaked some early benchmark scores. And it's bad news for Surface Pro X and Windows 10 on ARM fans. Thurrott reports: According to multiple Geekbench scores, the Apple Developer Transition Kit -- a Mac Mini-like device with an Apple A12Z system-on-a-chip (SoC), 16 GB of RAM, and 512 GB of SSD storage -- delivers an average single-core score of 811 and an average multi-core score of 2871. Those scores represent the performance of the device running emulated x86/64 code under macOS Big Sur's Rosetta 2 emulator.

Compared to modern PCs with native Intel-type chipsets, that's not all that impressive, but that's to be expected since it's emulated. But compared to Microsoft's Surface Pro X, which has the fastest available Qualcomm-based ARM chipset and can run Geekbench natively -- not emulated -- it's amazing: Surface Pro X only averages 764 on the single-core test and 2983 in multi-core. Right. The emulated performance of the Apple silicon is as good or better than the native performance of the SQ-1-based Surface Pro X. This suggests that the performance of native code on Apple silicon will be quite impressive, and will leave Surface Pro X and WOA in the dust.

This discussion has been archived. No new comments can be posted.

  • by jeromef ( 2726837 ) on Tuesday June 30, 2020 @02:21AM (#60245798)
    Careful with the comment about emulated code being "obviously" slower than native code... "JIT code generally offers far better performance than interpreters. In addition, it can in some cases offer better performance than static compilation, as many optimizations are only feasible at run-time" https://en.wikipedia.org/wiki/... [wikipedia.org]
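To illustrate the run-time-optimization point with a toy sketch (all names here are invented for illustration; this is not how any real JIT is implemented): a JIT can specialize code on values that only become known at run time, something a static compiler cannot do.

```python
# Toy illustration: specializing code on a value known only at run time.
# A static compiler must keep the loop generic; a pretend "JIT" that sees
# the run-time constant can emit a straight-line specialized version.

def generic_dot(xs, ws):
    # generic path: weight vector unknown at compile time
    return sum(x * w for x, w in zip(xs, ws))

def jit_specialize(ws):
    # pretend-JIT: generate source with the weights baked in as constants
    body = " + ".join(f"xs[{i}] * {w}" for i, w in enumerate(ws))
    src = f"def specialized(xs):\n    return {body}\n"
    ns = {}
    exec(src, ns)
    return ns["specialized"]

weights = [2, 0, 5]          # becomes a compile-time constant for the "JIT"
f = jit_specialize(weights)
assert f([1, 2, 3]) == generic_dot([1, 2, 3], weights)  # both give 17
```

A smarter version could fold out the multiply-by-zero entirely, which is exactly the kind of optimization that is only feasible once the value is known.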
    • by vlad30 ( 44644 )
      Very true. Well-written code was present in the first PC emulator for the Mac. I recall the delight when someone would say "Oh, a Mac, but it can't run Windows" and I would start VirtualPC and watch the jaw drop when it was clearly faster than the PC they were using. Shame Microsoft bought it and then hobbled it.
      • by gl4ss ( 559668 ) on Tuesday June 30, 2020 @03:36AM (#60245930) Homepage Journal

        that sounds like a load of bs, on which powerpc it was running vs. what they were running it on?
        because in no point in history was it ever faster to run x86 code on ppc than on x86.

        as for the slashdot posting of this article.. win10 on arm fans? what the f? there's no fans for windows 10 on arm.

        • that sounds like a load of bs, on which powerpc it was running vs. what they were running it on?
          because in no point in history was it ever faster to run x86 code on ppc than on x86.

          as for the slashdot posting of this article.. win10 on arm fans? what the f? there's no fans for windows 10 on arm.

          Maybe that happened already after Apple switched to Intel lol.

          Microsoft's attempt to resurrect Windows RT doesn't seem to be going anywhere. I get that they want to have an escape hatch, but so far Qualcomm's garbage isn't really providing anything that Intel or AMD couldn't come up with in another generation, especially Intel once they finally get to the 7nm process.

          And this story... the numbers seem to be saying the opposite of what the author claims. It's literally slower than the Surface X. Maybe if you

          • by Rhipf ( 525263 )

            Based on the summary, Apple is faster for single core operations (Apple 811 vs Microsoft 764) and Microsoft is faster for multi-core operations (Apple 2871 vs Microsoft 2983). So when they state:

            The emulated performance of the Apple silicon is as good or better than the native performance of the SQ-1-based Surface Pro X

            they are technically correct. For single-core operations Apple is better than Microsoft, and for multi-core operations Apple is "as good" as Microsoft depending on how you define "as good". I assume that they consider 6ish% better (single-core) and 4ish% slower "as good" (multi-core).
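Those percentages can be checked directly from the scores quoted in the summary (a quick sketch; the variable names are mine):

```python
# Checking the deltas implied by the summary's Geekbench numbers.
apple_single, apple_multi = 811, 2871      # A12Z under Rosetta 2 (emulated)
sq1_single, sq1_multi = 764, 2983          # Surface Pro X SQ1 (native)

single_delta = (apple_single / sq1_single - 1) * 100   # Apple ahead
multi_delta = (sq1_multi / apple_multi - 1) * 100      # Microsoft ahead

print(f"single-core: Apple +{single_delta:.1f}%")   # ~6.2%
print(f"multi-core:  Surface +{multi_delta:.1f}%")  # ~3.9%
```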

        • Maybe it was one of the PowerPC Macs that had the x86 card? I forget which model, but I had one at a job back in the 1990s.

        • by vlad30 ( 44644 )
          Wasn't on PowerPC; it was a Mac IIfx, a 68030 machine. Most people had 8086 PCs, or if they were lucky an 80286.
      • by cide1 ( 126814 )

        VirtualPC is a virtual machine, not an emulator. https://stackoverflow.com/ques... [stackoverflow.com]

        • by Guspaz ( 556486 )

          Connectix VirtualPC was originally an x86-on-PowerPC emulator for the Mac. It was later released on Windows as a virtual machine, and then bought out by Microsoft.

      • Wow, my memory is quite different. VPC *ran*, but not very well. It was fine for a few lightweight tasks, but it was never snappy, even on a well-spec'd (for the day) Mac with multiple PowerPC CPUs. Emulation only became usable with Intel-based Macs and one of the commercial hypervisors.
    • by Namarrgon ( 105036 ) on Tuesday June 30, 2020 @02:37AM (#60245828) Homepage

      Only because it most often is significantly slower than native code (though still far better than interpreted), but as always it depends on the code itself - there are fairly specific cases where it's possible for JIT to exceed a static compile on certain hardware.

      For a rough comparison, this [geekbench.com] is Geekbench running on an iPad Pro's apparently-similar A12Z CPU under iOS, giving 1115 for single-threaded and 4670 for multi-threaded. There are hardware and OS differences of course (the iPad is 2490 MHz vs 2400 MHz) but I'd expect it to be in the ballpark of native AppleSilicon performance under macOS.

      • Re: (Score:2, Insightful)

        by AmiMoJo ( 196126 )

        It's possible Apple tuned Rosetta for Geekbench too. Since they know that's the benchmark everyone will look at they could easily have tuned their translator to recognize the inner loops it uses and replace them with hand optimized ones.

        • If true, that's probably much bigger news: that Apple actually considers itself to be in the same market as PCs, where generic objective performance metrics structure the market. Until I see evidence of that, I'd rather not go down that rabbit hole though.

          • by AmiMoJo ( 196126 )

            Their demonstrations have all been highly suspect. Every app and game was selected to be GPU-bound so that the CPU being weaker than current x86 ones isn't apparent.

        • It's possible Apple tuned Rosetta for Geekbench too. Since they know that's the benchmark everyone will look at they could easily have tuned their translator to recognize the inner loops it uses and replace them with hand optimized ones.

          That's a nice theory, but everyone receiving the development kit has to sign an NDA not to post any benchmark results. Anyway, Apple has full access to LLVM in the OS. So loops go through the full LLVM optimisation process anyway.

        • by lgw ( 121541 )

          It's possible Apple tuned Rosetta for Geekbench too. Since they know that's the benchmark everyone will look at they could easily have tuned their translator to recognize the inner loops it uses and replace them with hand optimized ones.

          When Sun got busted doing that back during the Megahertz Wars, it was pretty much the end of their CPU and compiler teams. Oh, the teams went on for years, but the top talent wandered off so as not to be associated with the cheating. It was a long slide down to Oracle after that.

          • by AmiMoJo ( 196126 )

            Worked out pretty well for Intel though. Remember when their compilers did a

            if (CPU == GenuineIntel) RunFastCode(); else RunSlowCode();
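A toy sketch of the vendor-dispatch pattern being alluded to (function names are invented; the real compiler runtime keyed on the CPUID vendor string rather than a Python value):

```python
# Toy version of vendor-based dispatch: both paths compute the same result,
# but the branch is on who made the CPU, not on what features it supports.

def fast_path(xs):
    # stand-in for a vectorized (SSE/AVX) code path
    return sum(xs)

def slow_path(xs):
    # stand-in for the deliberately generic fallback path
    total = 0
    for x in xs:
        total += x
    return total

def dispatch(vendor):
    # the controversial part: branch on vendor string, not actual CPU features
    return fast_path if vendor == "GenuineIntel" else slow_path

assert dispatch("GenuineIntel")([1, 2, 3]) == 6
assert dispatch("AuthenticAMD")([1, 2, 3]) == 6   # same answer, slower path
```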

        • It's possible Apple tuned Rosetta for Geekbench too. Since they know that's the benchmark everyone will look at they could easily have tuned their translator to recognize the inner loops it uses and replace them with hand optimized ones.

          It's also possible that these are completely useless figures unless power consumption is published, too.

          Surface Pro X might be slower because it consumes a tenth of the power of the chips used in this "benchmark".

    • Only in very rare instances. 99% of the time the compilation time more than makes up the difference.

      • Depends what kind of software we are talking about.
        Server-side, running for weeks, JIT-compiled is always close to optimum.

        An app that gets loaded and used for 20 seconds, like a weather app, not so much -- but who would care?

        And then there are the compromises: "semi-JIT" on first load and caching the binary ...

        • by lgw ( 121541 )

          Java always seems slow though. From the once-popular client-side apps that gradually died as people got tired of how slow everything was, to the server side where "eh, we'll just throw a few more servers at it, it's cheaper than more dev time". I've been hearing how great JIT is for 20 years, but Java is always slow.

        • I used to cash my binaries, but I was getting killed on the fees. Direct deposit has been much more convenient.
    • by Joe2020 ( 6760092 ) on Tuesday June 30, 2020 @04:27AM (#60246008)

      The notion that it must be slower comes from the thought that whatever one can do in emulation, one could also do natively. But one doesn't see self-modifying code anymore these days: instruction caches stop working, code now often gets moved around in memory, and nobody apart from brainiacs enjoys debugging self-modifying code. So the art of self-modifying code got lost.

      What compilers do now is implement multiple versions of basically the same code for different conditions. But this helps emulators just as much as it helps native code. Thus there are still ways for emulation to catch up to native code.

      The vast majority of code has not been compiled for a specific CPU, but only for a generic type which represents many CPUs. Thus emulators can make use of features specific to the CPU they're running on which the compiler didn't use (i.e. vector units, or just instructions that aren't available on all CPUs).

      What further helps is that x86 CISC designs are just RISC designs in disguise these days. So even when they offer more complex instructions, these come at a high cost, and compilers generally avoid using them. You'll see this throughout many of the coding guidelines from AMD and Intel, where they frequently point out how one combination of instructions should be avoided in favour of others, usually because of changes in the CPU design to suit the need for more speed. So instruction timings have changed, and instructions that exploit the underlying RISC design run faster than others. Thus compilers now produce code that is more RISC-like rather than making full use of the x86 CISC complexity, simply because using the full complexity is no longer as beneficial as it used to be. This, too, makes the job easier for emulators.

    • by AmiMoJo ( 196126 )

      This is true for older CPUs but not for modern x86. A modern x86 CPU does something very similar to JIT on the fly, but tailored to that specific CPU and the resources it has available (number of ALUs, FPUs, execution units, cache misses, memory access cycles, all of which can change dynamically in a power/heat constrained laptop).

      If it were somehow possible then AMD and Intel would quickly release microcode updates to take advantage of whatever improvement had been found.

    • Careful with the comment about emulated code being "obviously" slower than native code...

      "JIT code generally offers far better performance than interpreters. In addition, it can in some cases offer better performance than static compilation, as many optimizations are only feasible at run-time" https://en.wikipedia.org/wiki/... [wikipedia.org]

      If emulated code were faster, then x86 could emulate itself to run faster, couldn't it?

    • "as many optimisations are only feasible at run-time"

      but none of them are actually implemented. Those quotes justifying JIT compilers miss one important factor: they do not do those imaginary optimisations. Nobody bothers, as the work to implement, test, and support such things is huge, and teams are too busy churning away adding new features.

      JIT teams are too busy to even support optimisations for ordinary features not even dependent on some architectures. E.g. here's an optimisation [slashdot.org] that Microsoft finall

    • by ceoyoyo ( 59147 )

      Neither JIT nor interpreted is emulation.

  • So Silicon can't run Geekbench natively while Surface's ARM can? Sounds like a win for Surface.
    • Re: (Score:3, Informative)

      by gnasher719 ( 869701 )

      So Silicon can't run Geekbench natively while Surface's ARM can? Sounds like a win for Surface.

      Are you stupid, or are you intentionally acting stupid? They were running Geekbench for MacOS. Compiled to Intel code. Geekbench has a version for iOS, compiled to ARM code, and a version for Surface Pro, compiled to ARM code, and when Macs with ARM chips are officially released, they will have a version for MacOS, compiled to ARM code.

      • Yes, dumb-dumb but they don't have a native Silicon (do you understand I'm talking about the product?) ARM Geekbench now and they won't have tons of other software at release time. Emulation for everything they're missing will be a bitch in terms of speed / battery / RAM.
  • So it's performing worse than a Core i3?

    • So it's performing worse than a Core i3?

      Yes. A computer built for developers to test their software, containing a two year old chip, not taking advantage of the available power and cooling, and running code in an emulator, is performing worse than a Core i3.

      Now take a new chip, run it at the clock speed that a laptop or desktop allows, and run software compiled for ARM, and it leaves a Core i3 in the dust. (That's what you might get in the cheapest future Macs). Then double or quadruple the number of cores, which is no problem for a processor

  • It stands to reason that a Mac Mini-like device would beat a Surface-like device, based on thermal limits alone, to say nothing of the fact that I think we all expect a high-end not-yet-released device to be faster than a high-end 9-month-old device.

    But really I'm more interested in the emulation topic. Let's get native ARM benchmarking software on the Mac and get non-native on the PC and compare the four of them. I have high hopes for Rosetta 2's performance over Microsoft's attempts at emulation.

    • There was an article some time back, perhaps a year ago, about one of the benchmark companies getting a leaked report from what claimed to be OSX on ARM, with freakish performance. But there was pretty heavy skepticism because it was OSX on ARM. Now that we know such a thing exists, and has likely existed for some time behind closed doors, it seems like it might not be such a far-fetched thing after all.

      • by kriston ( 7886 )

        Several years back I poked around the innards of a jailbroken iPod Touch and it sure looked a lot like OS X under the hood.

  • by Chris Katko ( 2923353 ) on Tuesday June 30, 2020 @03:14AM (#60245886)

    newer product faster than older product.

    News at 11.

  • by nfkso_78 ( 7005794 ) on Tuesday June 30, 2020 @03:18AM (#60245898)
    Well the A12Z is pretty much the same as the A12X used in the 2018 iPad Pro, the difference being the Z has one extra GPU core unlocked. So to sum it up: a Mac with a slightly jazzed-up iPad processor from 2 years ago, running the benchmark IN EMULATION on a full-fledged desktop OS, is faster than: a Surface Pro X with a BESPOKE, 1-year-old ARM processor (yea I know it's not THAT bespoke, but still), running the benchmark in its NATIVE architecture on a full-fledged desktop OS. Tbh it's more of a reflection on the failure of MS than the achievement of Apple.
  • by aliquis ( 678370 ) on Tuesday June 30, 2020 @03:36AM (#60245926)

    This suggests that the performance of native code on Apple silicon will be quite impressive

    No.

    First of all, Apple's ARM chips are faster than Qualcomm's. So to judge the emulated performance you should of course compare this to the score when running the program natively on the same chip, not vs another chip.

    Secondly, the text itself says it's not impressive against an x86/AMD64 chip. So.. No.

    The only "impressive" bit is that non-native code runs as well on the Apple chip as native code does on the competitor's chip, but what's impressive there is how much better Apple's chips are.

    https://browser.geekbench.com/ios_devices/ipad-pro-12-9-inch-4th-generation [geekbench.com]
    A12Z in iPad Pro 12.9": 1118 single-core, 4626 multi-core.
    So the translated instructions / emulated environment have 72.5% of the single-core and 62% of the multi-core performance of that.

    And of course that A12Z in the iPad Pro beats the Surface Pro by 46.3% in single-core and 55% in multi-core performance.

    Top i9 10900K single-core 1393 is 71.8% above this single-core score and the multi-core 11544 is 302% higher.
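The parent's percentages all check out against the quoted scores (a quick verification sketch):

```python
# Reproducing the percentages in the parent comment from its raw scores.
rosetta = (811, 2871)        # A12Z under Rosetta 2 (emulated x86-64)
ipad_a12z = (1118, 4626)     # A12Z native, iPad Pro 12.9"
surface = (764, 2983)        # Surface Pro X SQ1, native ARM
i9_10900k = (1393, 11544)    # Core i9-10900K

# emulated scores as a fraction of native A12Z
assert round(rosetta[0] / ipad_a12z[0] * 100, 1) == 72.5
assert round(rosetta[1] / ipad_a12z[1] * 100) == 62

# native A12Z over Surface Pro X
assert round((ipad_a12z[0] / surface[0] - 1) * 100, 1) == 46.3
assert round((ipad_a12z[1] / surface[1] - 1) * 100) == 55

# i9-10900K over the emulated scores
assert round((i9_10900k[0] / rosetta[0] - 1) * 100, 1) == 71.8
assert round((i9_10900k[1] / rosetta[1] - 1) * 100) == 302
```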

    • Top i9 10900K single-core 1393 is 71.8% above this single-core score and the multi-core 11544 is 302% higher.

      Add 50% performance for not running emulated. Add a few percent performance for running a late 2020 processor. Add 40 percent performance for running at 3.5 GHz. Then multiply the multi-core result by 2 or 4, because Apple won't sell anything but the bottom line Macs with an iPad processor; everything else will have twice or four times the number of cores. And then compare power consumption, battery life, and processor cost.

      • "it's bad news single-core score of 811 and an average multi-core score of 2871 running emulated x86/64 code -- it's amazing: Surface Pro X only averages 764 on the single-core test and 2983 in multi-core. emulated performance of the Apple silicon is as good or better than the native performance of Surface Pro X. performance of native code on Apple silicon will be quite impressive, and will leave Surface Pro X and WOA in the dust."

        So the two systems are about neck-and-neck, and the claim evoking all of

  • by Generic User Account ( 6782004 ) on Tuesday June 30, 2020 @03:50AM (#60245956)
    They hope nobody remembers the previous CPU architecture transitions. They always promise glorious performance. Nobody thinks these transitions went great.
    • Are you retarded? Some of us were actually there. Other than some developers dragging ass, it went far smoother than any attempt by PC clone makers to make the PC suck less.

      680x0 -> PowerPC went phenomenally well. Even on the cheapest PowerPC Macs, performance running emulated 68K software was pretty damn impressive.

      PowerPC -> Intel went quite well too even though the early Core Duo wasn't all that impressive compared to the high end G5 machines still around at the time.

      The transitions w

      • by Megane ( 129182 )

        680x0 -> PowerPC went phenomenally well

        It was pretty decent considering that they only re-compiled the minimum amount of code for PPC at first, so you were still running mostly 68K code. The emulator itself was pretty impressive considering that it ran in the same real address space and was almost transparent.

        My only complaint is they didn't keep classic mode and Rosetta around in subsequent OS revisions.

        I would have been happier if they had kept Classic for one more version. The last PPC support was in 10.5, but the last Classic was in 10.4. Rosetta could be manually installed in 10.6 and it kept me from ever using 10.7 or 10.8.

    • by k2r ( 255754 )

      I think they went fine and the other developers in the companies I used to work for back then thought so, too.
      Of course, not transitioning would have meant fewer problems then, but we would still be stuck on 68K or PPC now.
      It's time to move again; that's going to be interesting!

    • The transition was fine. Things were substantially faster on my Intel Macs than my G5 Mac. I ran some stuff under emulation for a while, but everything ended up native eventually. For most of my stuff, it didn't even take that long. The only things that I missed were some games that were PPC-only and never got recompiled.

      I don't doubt that a segment of the population had a rough transition, but most of us were just fine. The biggest apps all came across fairly quickly, and Apple's own apps obviously were th

  • Brainwashed? (Score:5, Insightful)

    by FumarMata ( 1340847 ) on Tuesday June 30, 2020 @04:23AM (#60246002)

    Article title: "First Apple Silicon Benchmarks Destroy Surface Pro X"

    In the same article:

    • Apple A12Z multicore performance: 2871
    • Surface Pro X multicore performance: 2983

    Speechless...

    • Re: (Score:2, Insightful)

      by chr1sb ( 642707 )
      Both benchmarks were running x86 code. Their performance was very close. However, the A12Z (an ARM processor) was emulating x86; the Surface Pro X was running x86 natively. The point is that once non-emulated ARM64 code is available for Geekbench (it sort of is with iOS, but not for macOS), it can be expected to be noticeably (significantly?) faster.

      Importantly, in this developer-only version, performance of emulated x86 on Apple ARM hardware is already very respectable. It is reasonable to expect it

  • by otuz ( 85014 ) on Tuesday June 30, 2020 @04:33AM (#60246022) Homepage

    It's silly to post comparisons of early emulated code, since even the emulator will improve over time running on the same hardware. However, there are native scores on iOS and iPadOS:

    The A12Z on the iPad Pro 11" gets 1119/4699: https://browser.geekbench.com/... [geekbench.com]
    It's Apple's previous generation chip with two extra cores.

    The Intel Core i7-1065G7 on Microsoft Surface Laptop 3 is in the same ballpark with 1233/4751: https://browser.geekbench.com/... [geekbench.com]

    In single-core performance, the 6-core A13 Bionic in Apple's $500 iPhone SE beats both at 1328, but obviously loses at 3043 in multi-core, since it's a small telephone: https://browser.geekbench.com/... [geekbench.com]
    It's Apple's current generation chip though.

    What the production Macs will be using are next generation from that, probably A14 something, and will be engineered without such tight thermal limitations.

  • Is this Slashdot or WrestleMania? Since when did "destroy" mean to have a 20% advantage in comparative benchmarking?
  • Emulation isn't quite the correct term, I would think. If I understand correctly, this version of Rosetta does an Intel binary to Arm binary compilation. So, maybe not quite as good as source-to-binary, but quite a bit better than typical emulation.
    • Comment removed based on user account deletion
      • Rosetta can do both. It can do ahead-of-time binary-to-binary recompiling. It can also do JIT translation for dynamic code (i.e. handling a web browser's JavaScript JIT compiler). It cannot do some vector-op instructions or OS-level calls, though, so it is not a perfect emulator.
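A rough sketch of the difference between the two modes, using a tiny invented "guest" instruction set (nothing here reflects Rosetta's actual internals): statically known code can be translated once ahead of time, while dynamically generated code has to go back through the translator at run time.

```python
# Toy binary translator: a tiny invented "guest" instruction set is decoded
# once into host callables (the AOT case); guest code created at run time
# has to go through the same translator again (the JIT case).

def translate(program):
    """Translate a list of guest instructions into one host-callable function."""
    ops = {
        "inc": lambda acc, arg: acc + arg,
        "mul": lambda acc, arg: acc * arg,
    }
    steps = [(ops[op], arg) for op, arg in program]  # "decode" exactly once
    def run(acc=0):
        for fn, arg in steps:        # straight-line host execution afterwards
            acc = fn(acc, arg)
        return acc
    return run

static_binary = [("inc", 3), ("mul", 4)]   # known ahead of time
run_static = translate(static_binary)      # AOT: translate before running
assert run_static() == 12                  # (0 + 3) * 4

generated = static_binary + [("inc", 1)]   # guest code produced at run time
assert translate(generated)() == 13        # must re-translate on the fly
```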

    • by k2r ( 255754 )

      Paging the noob in this other thread who insisted that Rosetta can't compile x86_64 to ARM because A64 is not proper source code...

  • by Ed Tice ( 3732157 ) on Tuesday June 30, 2020 @07:15AM (#60246220)
    It seems no matter how fast the hardware is, apps always seem to expand to use every cycle that you have and more. It's amazing how inefficient things are.
  • IMHO, the Surface line didn't come into existence because Microsoft wanted to dominate the performance-oriented computing business. I think they came into being because they were frustrated that their touch interface and transformation to an Apple-style consumer focus were being thwarted/ignored by PC hardware makers who were still sticking to the low-margin beige box model.

    So Microsoft came out with their own reference platforms and opened their own stores (at Mall of America in Minnesota, the MS store is

  • The first is this - Apple wins in single core and loses in multicore. What are the other cores doing when it's emulating the single core test? Are they doing nothing, which would make it a valid result, or are they performing emulation tasks, like compiling the code being run on that one core? Or, since emulation is happening anyway, could it be using multiple cores to create a virtual single core? If that were the case, would it not present as higher single core scores and lower multicore scores?

    The

  • There isn't much difference; it could easily be a thermal limit.

  • Will they block other operating systems from running on it? Is it going to be a computer or is it going to be a console like the phones?

    If it is a console, then who cares?

  • A highly regarded benchmark. Synthetic benchmarks fell out of favor years ago because they aren't an accurate gauge of real-world performance and they can also be optimized against and fooled. It's actually a pretty big problem with Geekbench specifically, as many vendors have been found cheating it over the years.

    This is just pre-release product hype and nothing more. Synth benches don't matter and will matter even less once the real product is out and people can accurately judge its performance.
