Speed Test: Comparing Intel C++, GNU C++, and LLVM Clang Compilers

Nerval's Lobster writes "Benchmarking is a tricky business: a valid benchmark tries to remove all extraneous variables in order to get an accurate measurement, a process that's often problematic; sometimes it's nearly impossible to remove all outside influences, and often the process of taking the measurement can skew the results. In deciding to compare three compilers (the Intel C++ compiler, the GNU C++ compiler (g++), and the LLVM clang compiler), developer and editor Jeff Cogswell takes a number of 'real world' factors into account, such as how each compiler deals with templates, and comes to certain conclusions. 'It's interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time,' he writes. 'But I wasn't able to test much regarding parallel processing with clang, since its Cilk Plus extensions aren't quite ready, and the Threading Building Blocks team hasn't ported it yet.' Follow his work and see if you agree, and suggest where he can go from here."
  • Funny benchmarks (Score:2, Insightful)

    by Anonymous Coward

    The benchmarks in TFA are a little funny. Why is system time so large while user time is so small? The only time I've seen this in real applications is when there is major contention between cores for resources.

    • Re:Funny benchmarks (Score:4, Interesting)

      by Trepidity ( 597 ) <delirium-slashdotNO@SPAMhackish.org> on Monday November 04, 2013 @03:54PM (#45329935)

      This looks like it's testing compile time, in which case a large percentage of the time being system time isn't that uncommon. Lots of opening and closing small files, generating temp files, general banging on the filesystem. It can also depend heavily on your storage speed: compiling the Linux kernel on an SSD is much faster than on spinning rust.

    • Re:Funny benchmarks (Score:5, Interesting)

      by godrik ( 1287354 ) on Monday November 04, 2013 @04:28PM (#45330317)

      That's normal. There is hyperthreading on that machine, and it screws up that kind of measurement. You should always use wall-clock time when dealing with parallel codes. You should also repeat the test multiple times and discard the first few results, which the author did not do; that is standard practice in parallel programming benchmarks. Since the author did not do that, I assume he does not know much about benchmarking. Lots of parallel middleware has high initialization overhead. This tends to be particularly true for Intel tools.
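
      For illustration, a minimal harness along those lines -- this is only a sketch, with kernel() as a hypothetical stand-in for the parallel code under test:

        #include <chrono>
        #include <cstdio>

        void kernel() { /* hypothetical: the parallel code being measured */ }

        int main() {
            const int warmup = 3, reps = 10;
            for (int i = 0; i < warmup; ++i)
                kernel();                                   // discarded: thread pools, caches and pages warm up here
            double total = 0.0;
            for (int i = 0; i < reps; ++i) {
                auto t0 = std::chrono::steady_clock::now(); // wall clock, not CPU time
                kernel();
                auto t1 = std::chrono::steady_clock::now();
                total += std::chrono::duration<double>(t1 - t0).count();
            }
            std::printf("mean wall-clock time: %f s\n", total / reps);
        }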

      • Although the fact that elapsed time is in minutes whilst the other times are in seconds might have something to do with it...
    • by robthebloke ( 1308483 ) on Monday November 04, 2013 @04:47PM (#45330515)
      I agree. He's not testing compiled code performance, he's just created a set of tests which will all be memory bandwidth limited. FTA:

      I’m testing these with an array size of two billion.

      That's all I needed to read to ignore him completely. Completely and utterly pointless. If g++ won, it is likely because it utilised stream intrinsics to avoid writing data back to the CPU cache, which would have freed up more cache and minimised the number of page faults. This will not in any way test the performance of the CPU code; it will just prove that your 1333MHz memory is slower than your 3GHz processor. This is why you don't profile code wrapped up in a stupid for loop, but profile whole applications instead. From my own tests (measuring the performance of large scale applications using real world data sets), intel > clang > g++ (although the difference between them is shrinking). The author of the article hasn't got a clue what he's doing. FTA:

      Notice the system time is higher than the elapsed time. That’s because we’re dealing with multiple cores.

      No, it isn't. It's because your CPU is sitting idle whilst it waits for something to do.
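
      To see why, here is a rough sketch that measures the effective bandwidth of such a loop (the sizes and byte counts are my own illustration, not from TFA); on a typical desktop the printed number tracks the DRAM, not the compiler:

        #include <chrono>
        #include <cstdio>
        #include <vector>

        int main() {
            const std::size_t n = 1 << 24;   // ~16.8M floats per array, ~200 MB total
            std::vector<float> a(n, 0.0f), b(n, 1.0f), c(n, 2.0f);
            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < n; ++i)
                a[i] = b[i] + 2.0f * c[i];   // trivial arithmetic: memory traffic dominates
            auto t1 = std::chrono::steady_clock::now();
            double s = std::chrono::duration<double>(t1 - t0).count();
            // roughly 12 bytes read + 4 bytes written per element
            std::printf("effective bandwidth: %.1f GB/s\n", 16.0 * n / s / 1e9);
        }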

      • by Anonymous Coward on Monday November 04, 2013 @05:53PM (#45331043)

        Here's another tipoff that the guy is clueless about benchmarking, talking about a test which does FP math:

        I’m not initializing the arrays, and that’s okay

        Actually, it's not. This is a bad mistake which totally invalidates the data. Many FPUs have variable execution time depending on input data. There is often a large penalty for computations involving denormalized numbers. If uninitialized data arrays happen to be different across different compilers (and they might well be), execution time can vary quite a lot for reasons completely unrelated to compiled code quality.

        It's not limited to FP, either. I remember at least one PowerPC CPU which had variable execution time for integer multiplies -- the multiplier could detect "early out" conditions when one of the operands was a small number, allowing it to shave a cycle or two off the execution time.

        The moral of the story: making sure that input data for benchmarks is always the same is very important, even when it's trivially obvious that the code will execute the exact same instruction count for any data set.
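
        A small sketch of that effect, assuming an x86-class FPU with a denormal penalty -- the identical loop is timed with normal and then denormal inputs:

          #include <chrono>
          #include <cstdio>
          #include <vector>

          // Sum the same array 100 times; only the fill value differs between runs.
          static double run(float fill) {
              std::vector<float> x(1 << 20, fill);
              volatile float sink = 0.0f;            // volatile keeps the work from being optimized away
              auto t0 = std::chrono::steady_clock::now();
              for (int rep = 0; rep < 100; ++rep) {
                  float s = 0.0f;
                  for (float v : x) s += v * 1.5f;
                  sink = sink + s;
              }
              auto t1 = std::chrono::steady_clock::now();
              return std::chrono::duration<double>(t1 - t0).count();
          }

          int main() {
              std::printf("normal inputs:   %.3f s\n", run(1.0f));
              std::printf("denormal inputs: %.3f s\n", run(1e-42f));  // 1e-42f is a denormal float
          }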

        • by godrik ( 1287354 )

          Let alone the fact that on Linux a memory page is allocated (and so placed in memory) the first time it is touched. So if you do not initialize the array, the allocation and placement happen during the traversal, which is probably not something you want to time.
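
          A sketch of the fix under OpenMP (a stand-in for the article's loop, built with e.g. g++ -fopenmp): fault the pages in with the same access pattern before the timed region.

            #include <omp.h>
            #include <cstdio>

            int main() {
                const long long n = 100000000;             // 100M doubles, ~800 MB
                double *a = new double[n];                 // virtual allocation; pages not placed yet

                #pragma omp parallel for schedule(static)  // first touch: pay the page-fault cost here
                for (long long i = 0; i < n; ++i)
                    a[i] = 1.0;

                double t0 = omp_get_wtime();               // timed region now sees only the traversal
                double sum = 0.0;
                #pragma omp parallel for schedule(static) reduction(+:sum)
                for (long long i = 0; i < n; ++i)
                    sum += a[i];
                std::printf("sum=%.0f time=%.3f s\n", sum, omp_get_wtime() - t0);
                delete[] a;
            }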

      • by godrik ( 1287354 )

        From my own tests (measuring the performance of large scale applications using real world data sets), intel > clang > g++ (although the difference between them is shrinking).

        I have run lots of experiments in this area as well, and my overall conclusion was that the Intel compiler, the PGI compiler and GCC are all good compilers, but their performance varies significantly depending on what you are compiling. For some applications you would get different results; or, within your application, some functions are compiled more efficiently by the Intel compiler while others are better compiled by GCC. It is really difficult to say which is "the best" compiler in terms of performance.

  • by Anonymous Coward on Monday November 04, 2013 @03:27PM (#45329569)
    Which one produced the fastest code?
    • Re: (Score:3, Insightful)

      by 0123456 ( 636235 )

      Which one produced the fastest code?

      My current project takes two hours to compile from scratch, and uses around 20% CPU when it runs. So yes, compile time can be more important than how fast the code runs.

      • by ledow ( 319597 ) on Monday November 04, 2013 @03:37PM (#45329685) Homepage

        But over the lifetime of any average program, runtime should outweigh compile time by orders of magnitude.

        Otherwise, honestly, why bother to write the program efficiently at all?

        And if you want to decrease compile times, it's easy - throw more machines and more power at the job. If you want to decrease runtime, then ALL of your users have to do that.

        Honestly, if your compile times are that long, and that much of a burden, you need to upgrade, and you also need to modularise your code more. The fact is that most of that compile time isn't actually needed for 90% of compiles unless your code is very crap.

        • by rfolkker ( 443051 ) on Monday November 04, 2013 @04:06PM (#45330089)

          I have worked on projects that have taken upwards of 8 hours for a full compile. There is a lot of validity to considering the business impact of different compilers.

          The current mentality of throwing more horsepower at a problem is not always the practical or logical answer. If you can improve your overall compile time, it can improve your productive time.

          From a Build Engineering perspective, analyzing why a project takes as long as it does to compile is one of the most important tasks.

          Not only do I monitor how long a project takes to compile, but I also keep a running average, and track highs and lows to identify compile-time spikes.

          We monitor processor(s), disk access speeds, memory loads, build warnings, change size, concurrent builds, etc.

          We look at all possible solutions. With the current build tools we have, we can provision another build system for the queue or, if necessary, add memory, disk space, faster drives, or more processors, or even upgrade software. We have gone as far as home-grown fixes to get around issues until better solutions become available.

          All of this needs to be accounted for, so, not only is compile time relevant, but what is CAUSING compile times is relevant.

        • by 0123456 ( 636235 )

          Honestly, if your compile times are that long, and that much of a burden, you need to upgrade, and you also need to modularise your code more. The fact is that most of that compile time isn't actually needed for 90% of compiles unless your code is very crap.

          Hint: I said 'two hours to compile from scratch'. You can't avoid compiling all your source if you just did a clean checkout from SVN into an empty source tree, as you would, for example, before building a release or release candidate.

        • by adisakp ( 705706 )
          Faster compile times make for faster iteration... which lets you test global changes (for example, which optimizations actually work) more easily. Not to mention that better iteration on a program usually produces a superior product.

          Also, as a developer, faster compile times make my life a little less frustrating so I'll be less likely to pull out all my hair while waiting on the computer.
           • I have to say, if we ignore Firefox/Linux/other ridiculously huge codebases, the people seeing hours of compile time must be doing something wrong. The 30,000-line Fortran code I'm working on now takes

            $ time make clean optim_ifort
            ...
            make clean optim_ifort 50.16s user 0.86s system 98% cpu 51.731 total

            in serial, and it's roughly 3x faster for the parallel cmake build. This is with -O3 and inter-procedural optimizations turned on, generating AVX-tuned code. If I have only edited a file or two, the compile takes only a few seconds.
             • 30,000 lines of code is a tiny project. I have codebases I wrote myself that are larger. Anything developed by a team is likely to be at least an order of magnitude larger. You're also comparing Fortran to C++, so you get a much faster compile: Fortran doesn't encourage large compile-time code generation in the way C++ templates do (which makes parsing very slow), and it makes alias analysis trivial, which makes a lot of optimisations easier.
            • by adisakp ( 705706 )
              30K lines is pretty small. It's smaller than a single library in our code base. For example, both our memory system and our network layers are significantly larger than this and they are just support libraries.
         • An excessively long build time can inflate development costs if the delay in testing new code becomes prohibitively long. A large codebase that takes 4 hours to build on a slow compiler will force developers to frequently wait overnight for test results to come back. If a different compiler can build that code 4x faster, you have many more opportunities to observe test results during a work day. Upgrading the build system isn't always an option when you have to support legacy platforms with inherently slow compilers.

          • by AmiMoJo ( 196126 ) *

            How often is the compiler the bottleneck? It seems more likely that optimizing the build process and hardware (particularly by adding more RAM and an SSD) would have a much, much greater effect than switching compilers, and at no expense to your run-time performance.

        • by smash ( 1351 )
          However, faster compile time means faster development and debugging.
      • The time spent on running code vs compiling code, for me, is like 10000:1, to be optimistic. Compilation time is pretty irrelevant for me and, I daresay, most users.

      • by TheGavster ( 774657 ) on Monday November 04, 2013 @04:46PM (#45330509) Homepage

        While any user-facing application is going to spend most of its time waiting for the user to do something, the latency to finish that task is still something the user will want to see optimized. Further, if a long-running task tops out at 20% CPU, apparently optimization was weighted too much towards CPU and you need to look into optimizing your IO or memory usage.

        • sigh...

          His 20% is 1 of 6 cores (16.666%) plus a little bit.

          Doesn't seem like anyone here has even a cursory understanding...
      • You're IO bound. Get a real disk subsystem.

      • Yes, because whenever I'm running an application that eats 80% of my CPU time doing something trivial, I think to myself: "It's no big deal really. I bet it compiled like a bat out of hell!"
      • by VortexCortex ( 1117377 ) <VortexCortex&project-retrograde,com> on Monday November 04, 2013 @09:23PM (#45332543)

        Which one produced the fastest code?

        My current project takes two hours to compile from scratch, and uses around 20% CPU when it runs. So yes, compile time can be more important than how fast the code runs.

        I had a C++ project like that once... It was a tightly coupled scripting language that could be compiled down to the host language if parts needed to be sped up. I noticed that I was mostly avoiding C++ features that didn't exist in the scripting language (e.g. multiple inheritance with non-pure virtual base classes, which the scripting language supported by allowing variables to be virtual) and implementing them in C instead. So I ditched C++ and coded to C99 instead. When I got all the C++ out of the codebase (thus making it compilable as either), the compile time dropped from an hour and a half in C++ to 15 minutes in C. Since I absolutely must have scripting, and the VM language optionally allows GC transparently across C or script (by replacing malloc and friends), and it has more flexible OOP (entities can change "state" and thus remap method behaviors (function pointers), a large improvement over jump tables (switch statements) for my typically highly stateful code), I avoid C++ like the plague.

        In fact, since the scripting language can translate itself into C, I don't touch C much either for my projects unless I'm extending the language itself. Over the years I've ported the "compiled" output to support Java, Perl, and JavaScript (and am working on targeting asm.js). It's grueling work just for my games and OS hobby projects, but I really can't bring myself to use a compilable language that doesn't double as its own scripting language -- that's asinine, especially if it compiles an order of magnitude slower.

        Don't get me wrong, I get the utility of a general purpose OOP language built around the most general purpose use cases possible; however, when you design something for everyone, you've actually designed it for no one at all. I'll take a language with full language features applicable to its templating (code generation), like my scripting language (or Lisp), over C++ or Java any day. (Note: Rhino or LuaJIT + Java is a close contender as far as usability goes, but nothing beats natively compiled code for my applications' use case.)

        WRT the "insightful" commenter above: deadlines are far more important to code being able to run than the minute performance gains distributed across end-user systems, which are influenced by Moore's law. Release date is far more significant: the code has 0% usability if I can't produce it in time. Unfortunately, some projects depend on emergent behaviors and thus require fast revisions to tweak (this goes doubly for me, hence the scripting component requirement).

      • by jandrese ( 485 )

        My current project takes two hours to compile from scratch,

        Ah, another satisfied Boost user, I see.

    • by ShanghaiBill ( 739463 ) on Monday November 04, 2013 @03:46PM (#45329809)

      Which one produced the fastest code?

      It doesn't matter. It may matter which one compiles your code faster. Depending on your use of things like templates, classes, etc., that may be a different compiler than the best one in the benchmarks. But even that is unlikely to matter much. I doubt there is much more than a few percent difference. More important are issues like standards compliance, good warning messages, tool-chain/IDE integration, etc.

  • by Bill, Shooter of Bul ( 629286 ) on Monday November 04, 2013 @03:29PM (#45329597) Journal

    What on earth does compiler benchmarking have to do with the BI section of slashdot?

    Furthermore, why on earth are you idiots creating a blurb on the main page that just links to a different Slashdot article? It's such terrible self-promotion. Just freaking write the main article as the main article. No need to make it seem as if the Business Intelligence section is actually worth reading; it's not.

  • Measuring pebbles (Score:5, Insightful)

    by T.E.D. ( 34228 ) on Monday November 04, 2013 @03:43PM (#45329775)

    Interesting info, but I have a couple of issues:

    First off, why wasn't Microsoft's C++ compiler included in this? That's the one we use at work, so that's the one I'd really like compared to all those others. Are we the only ones still using it or something?

    More importantly, why on earth was compilation speed the only thing compared? I mean, I suppose it's nice for g++ users to know that their 10-minute compiles would have been 2 minutes longer if they used the Intel compiler, but Intel users might not really care if they believe their resulting code is going to run faster. Speed of compilation of unoptimized code is a particularly useless metric, because different compilers have different definitions of "unoptimized", so it's guaranteed you aren't comparing apples to apples.

    I suppose compilation speed is a nice metric to brag about between compiler writers. But for compiler users, the most important things are roughly these, in order: toolchain support, language feature support (e.g. C++11/14 features), clarity of error/warning messages, speed of generated code (optimization), and lastly speed of compilation. I'm not really sure why you took it upon yourself to measure the least important factor, and only that one.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Your pre-elementary reading and comprehension skills leave much to be desired.

      It’s interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time.

      Just in case you didn't get that: They did benchmark the resulting binaries, and g++ made the best ones.

      • Re:Measuring pebbles (Score:5, Interesting)

        by T.E.D. ( 34228 ) on Monday November 04, 2013 @04:24PM (#45330271)

        OK. Much abashed, I went back through the article.

        It turns out that there are numbers for an actual code benchmark. It's found about two-thirds of the way through the report, in the third graph (untitled), after the balance of the text had already been devoted to compilation speed comparisons. Also, it only listed 2 of the 3 compilers, half the data was for unoptimized code (and thus useless), and it was hidden behind a sign that said "Beware of leopard".

        OK, perhaps I made that last part up.

        For the curious, the difference in at least that one table was never more than 5%. In my mind, hardly a differentiator unless you are writing heavy number-crunching or stock-trading programs.

        Perhaps the remaining 1/3rd is all about more important things? I've lost interest. You're right. I'm weak.

    • Fast compilation can have its advantages. It's one of the reasons some developers like working in scripting languages (PHP, Ruby, etc.). If simple mistakes in coding don't cost you 20 minutes of compile time, it can speed up development a lot. I use .NET, which I think has a nice balance between the two: reasonable compile times, while still having compile-time type checking and the other advantages of a compiled language.
    • First off, why wasn't Microsoft's C++ compiler included in this?

      Maybe because most people want Microsoft out of the picture and conveniently choose to ignore it rather than lend credence to it?

  • Crappy benchmark (Score:5, Informative)

    by raxx7 ( 205260 ) on Monday November 04, 2013 @03:53PM (#45329917) Homepage

    The code in the benchmark runs a parallel for over a 10-billion-element array, but in steps of 100 elements.
    It's going to be limited by the creation and destruction of threads.

    Also, by not initializing the input array, the floating-point arithmetic is vulnerable to denormal values.
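
    For contrast, a sketch of the same kind of loop written so each thread gets one large contiguous chunk (OpenMP here for concreteness; the function and names are hypothetical):

      #include <omp.h>

      // One statically scheduled chunk per thread over the whole range,
      // instead of parcelling the work out 100 elements at a time.
      void scale_add(float *y, const float *x, long long n) {
          #pragma omp parallel for schedule(static)
          for (long long i = 0; i < n; ++i)
              y[i] += 2.0f * x[i];
      }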

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      By not initializing the input array, the code's behaviour is undefined, which makes this a test of what these compilers do with complete garbage source that isn't even a valid C++ program.

  • by godrik ( 1287354 ) on Monday November 04, 2013 @04:39PM (#45330431)

    I am a scholar and study parallel computing. These benchmarks are pretty much pointless; you cannot draw any conclusions from these results. Here the author takes the whole execution time, from the creation of the process to its destruction. That includes lots of overhead which, in a real application, would be part of startup time.

    There is also apparently no pinning of threads to computational cores. This is known to make a HUGE difference.
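
    One way to pin, sketched for Linux/glibc (with OpenMP you can often get the same effect from the environment alone, e.g. OMP_PROC_BIND=close OMP_PLACES=cores):

      #include <pthread.h>   // glibc extension; g++ defines _GNU_SOURCE by default
      #include <sched.h>

      // Pin the calling thread to a single core; returns 0 on success.
      int pin_to_core(int core) {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(core, &set);
          return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
      }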

    Then the author compared Cilk results. Cilk is known to be slow for simple codes that do not require work stealing and do not have complex dependencies. For the record, I know they are also comparing TBB. But TBB is implemented on top of the Cilk engine in the Intel compiler (I don't know about gcc).

    In these results hyperthreading is enabled. The proper use of hyperthreading is complicated: there are some problems where it helps and others where it harms, and I would not be surprised if this behavior were compiler-dependent.

    Finally, it is almost impossible to compare compilers. On different platforms, with the same compilers you will get different results. Some functions are better compiled by one compiler and some functions are better compiled by the other compiler. This has been reported over and over and over again.

    If you care about performance, you should not rely on what your compiler is doing behind your back; you need to know what it is doing. Depending on memory alignment (and what the compiler knows about it), on how the vectorization happens, and on potential memory aliasing, you will get different results.

    If you care about performance, you need to benchmark and you need to optimize and you need to know what the compiler does.
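
    A small example of handing the compiler aliasing information (__restrict__ is a GCC/Clang/ICC extension; you can check what the vectorizer did with -fopt-info-vec on GCC or -Rpass=loop-vectorize on Clang):

      #include <cstddef>

      // Without the restrict qualifiers the compiler must assume a, b and c
      // may overlap, and will often emit a runtime overlap check or skip
      // vectorization entirely.
      void add(float *__restrict__ a, const float *__restrict__ b,
               const float *__restrict__ c, std::size_t n) {
          for (std::size_t i = 0; i < n; ++i)
              a[i] = b[i] + c[i];
      }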

  • by pauljlucas ( 529435 ) on Monday November 04, 2013 @05:19PM (#45330813) Homepage Journal
    This information is perhaps 2 years out of date, but back then, for one of my projects, when we switched from g++ to Intel C++, our software got about twice as fast with no other changes. It got even faster when we took advantage of SSE3 instructions.
  • by mjwalshe ( 1680392 ) on Monday November 04, 2013 @06:11PM (#45331197)
    It is speed that is important, which is why a lot of HPC people still prefer the Intel compilers.
  • Why in the hell are you testing Clang with either Cilk or OpenMP when neither has moved into the mainline trunk of LLVM/Clang? This test is as worthless as the Phoronix test suite benchmarking apps that require OpenMP, where they note that Clang takes it in the shorts because it doesn't presently implement OpenMP. Complete waste of time.
  • In the times we live in - and the knowledge Ed S. has given us - do you really still trust a black-box compiler from a huge US corporation with intimate government ties?
