Software

Speed Test: Comparing Intel C++, GNU C++, and LLVM Clang Compilers

Nerval's Lobster writes "Benchmarking is a tricky business: a valid benchmark tries to remove all extraneous variables in order to get an accurate measurement, a process that's often problematic; sometimes it's nearly impossible to remove all outside influences, and often the act of taking the measurement can itself skew the results. In deciding to compare three compilers (the Intel C++ compiler, the GNU C++ compiler (g++), and the LLVM clang compiler), developer and editor Jeff Cogswell takes a number of 'real world' factors into account, such as how each compiler deals with templates, and comes to certain conclusions. 'It's interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time,' he writes. 'But I wasn't able to test much regarding parallel processing with clang, since its Cilk Plus extensions aren't quite ready, and the Threading Building Blocks team hasn't ported it yet.' Follow his work and see if you agree, and suggest where he can go from here."
  • by Bill, Shooter of Bul ( 629286 ) on Monday November 04, 2013 @04:29PM (#45329597) Journal

    What on earth does compiler benchmarking have to do with the BI section of slashdot?

    Furthermore, why on earth are you idiots creating a blurb on the main screen that just links to a different slashdot article? It's such terrible self-promotion. Just freaking write the main article as the main article. No need to make it seem as if the Business Intelligence section is actually worth reading; it's not.

  • Re:Funny benchmarks (Score:4, Interesting)

    by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Monday November 04, 2013 @04:54PM (#45329935)

    This looks like it's testing compile time, in which case a large % of the time being system time isn't that uncommon. Lots of opening and closing of small files, generating temp files, general banging on the filesystem. It can depend heavily on your storage speed: compiling the Linux kernel on an SSD is much faster than on spinning rust.
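    A minimal sketch of how a compile run can be split into wall-clock, user, and system components, which makes the filesystem-heavy kernel time visible. This is POSIX-only and assumes a g++ on the PATH and a test.cpp in the working directory; it's an illustration, not the article's harness.

        // Time a child compiler process: wall time via steady_clock,
        // user/system CPU time of the child via getrusage(RUSAGE_CHILDREN).
        #include <chrono>
        #include <cstdio>
        #include <cstdlib>
        #include <sys/resource.h>

        int main() {
            auto start = std::chrono::steady_clock::now();
            int rc = std::system("g++ -O2 -c test.cpp -o test.o");  // the measured work
            auto stop = std::chrono::steady_clock::now();

            struct rusage ru;
            getrusage(RUSAGE_CHILDREN, &ru);  // resources of the reaped child (the compiler)

            double wall = std::chrono::duration<double>(stop - start).count();
            double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
            double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
            std::printf("exit=%d wall=%.3fs user=%.3fs sys=%.3fs\n", rc, wall, user, sys);
            return 0;
        }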

  • Re:Measuring pebbles (Score:5, Interesting)

    by T.E.D. ( 34228 ) on Monday November 04, 2013 @05:24PM (#45330271)

    OK. Much abashed, I went back through the article.

    It turns out that there are numbers for an actual code benchmark. They're found about two-thirds of the way through the report, in the third (untitled) graph, after the bulk of the text had already been devoted to compilation-speed comparisons. Also, it only listed two of the three compilers, half the data was for unoptimized code (and thus useless), and it was hidden behind a sign that said "Beware of leopard".

    OK, perhaps I made that last part up.

    For the curious, the difference in at least that one table was never more than 5%. To my mind, that's hardly a differentiator unless you're doing heavy number crunching or writing stock-trading software.

    Perhaps the remaining third is all about more important things? I've lost interest. You're right. I'm weak.

  • Re:Funny benchmarks (Score:5, Interesting)

    by godrik ( 1287354 ) on Monday November 04, 2013 @05:28PM (#45330317)

    That's normal: there's hyperthreading on that machine, and it screws up that kind of measurement. You should always use wall-clock time when dealing with parallel code. You should also repeat the test multiple times and discard the first few results, which the author did not do, even though it's standard practice in parallel-programming benchmarks; since he skipped it, I assume he doesn't know much about benchmarking. Lots of parallel middleware has high initialization overhead, and that tends to be particularly true of Intel tools.
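    A minimal sketch of the methodology described above: time the workload with a wall clock (std::chrono::steady_clock), repeat the run several times, and discard the warm-up iterations so one-time initialization (thread pools, runtime startup) doesn't pollute the numbers. The work() function is a hypothetical stand-in, not the article's benchmark.

        // Wall-clock benchmark with warm-up discard and median reporting.
        #include <algorithm>
        #include <chrono>
        #include <cstdio>
        #include <vector>

        void work() {
            static std::vector<double> v(1 << 22, 1.0);  // stand-in workload
            double s = 0;
            for (double x : v) s += x;
            volatile double sink = s;  // keep the result live so the loop isn't elided
            (void)sink;
        }

        int main() {
            const int runs = 10, warmup = 3;
            std::vector<double> times;
            for (int i = 0; i < runs; ++i) {
                auto t0 = std::chrono::steady_clock::now();
                work();
                auto t1 = std::chrono::steady_clock::now();
                if (i >= warmup)  // discard warm-up iterations
                    times.push_back(std::chrono::duration<double>(t1 - t0).count());
            }
            std::sort(times.begin(), times.end());
            std::printf("median of %zu timed runs: %.4fs\n",
                        times.size(), times[times.size() / 2]);
            return 0;
        }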

  • Re:Measuring pebbles (Score:4, Interesting)

    by T.E.D. ( 34228 ) on Monday November 04, 2013 @05:41PM (#45330453)

    This could be the difference between an hour and 10 minutes for builds of some projects.

    If that's really the delta (one taking six times as long), then something is likely seriously, pathologically wrong with one of those two compilers. Submit a bug report (not that it helps you, but it will help someone else).

    But yes, different users in different phases will have different priorities. I'm not laying down an immutable law here, just trying to restore the proper proportion to a situation that we both agree is way out of whack.

  • by TheGavster ( 774657 ) on Monday November 04, 2013 @05:46PM (#45330509) Homepage

    While any user-facing application is going to spend most of its time waiting for the user to do something, the latency of finishing each task is still something the user will want to see optimized. Further, if a long-running task tops out at 20% CPU, optimization was apparently weighted too heavily toward CPU, and you need to look into optimizing your I/O or memory usage.
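    One quick way to check which side of that trade-off a task sits on is to compare its CPU time against wall-clock time. A rough sketch, where do_task is an invented stand-in that mostly sleeps (the way I/O-bound code waits):

        // Estimate CPU utilization as CPU time / wall time; a value far
        // below 100% per core suggests I/O- or memory-bound code.
        #include <chrono>
        #include <cstdio>
        #include <ctime>
        #include <thread>

        void do_task() {  // hypothetical stand-in: mostly waiting, a little compute
            std::this_thread::sleep_for(std::chrono::milliseconds(400));
            volatile double s = 0;
            for (int i = 0; i < 10000000; ++i) s = s + 1.0;
        }

        int main() {
            std::clock_t c0 = std::clock();              // process CPU time
            auto w0 = std::chrono::steady_clock::now();  // wall time
            do_task();
            double cpu  = double(std::clock() - c0) / CLOCKS_PER_SEC;
            double wall = std::chrono::duration<double>(
                              std::chrono::steady_clock::now() - w0).count();
            std::printf("cpu=%.2fs wall=%.2fs utilization=%.0f%%\n",
                        cpu, wall, 100.0 * cpu / wall);
            return 0;
        }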

  • Re:Crappy benchmark (Score:2, Interesting)

    by Anonymous Coward on Monday November 04, 2013 @07:44PM (#45331487)

    By not initializing the input array, the code's behaviour is undefined, which makes this a test of what these compilers do with complete garbage source that isn't even a valid C++ program.
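    A minimal sketch of that class of bug (not the article's actual code): reading an uninitialized buffer is undefined behaviour, so a compiler is free to do anything with it, including optimizing the read away entirely.

        #include <numeric>
        #include <vector>

        double sum_uninitialized() {
            double a[1024];                    // never written: every read is UB
            return std::accumulate(a, a + 1024, 0.0);
        }

        double sum_defined() {
            std::vector<double> a(1024, 1.0);  // value-initialized: well-defined
            return std::accumulate(a.begin(), a.end(), 0.0);
        }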

  • by VortexCortex ( 1117377 ) <VortexCortex@pro ... m minus language> on Monday November 04, 2013 @10:23PM (#45332543)

    Which one produced the fastest code?

    My current project takes two hours to compile from scratch, and uses around 20% CPU when it runs. So yes, compile time can be more important than how fast the code runs.

    I had a C++ project like that once... It was a tightly coupled scripting language that could be compiled down to the host language if parts needed to be sped up. I noticed that I was mostly avoiding C++ features that didn't exist (e.g., multiple inheritance with non-pure virtual base classes, which the scripting lang allowed by letting variables be virtual) and implementing them in C instead. So I ditched C++ and coded to C99 instead. When I got all the C++ out of the codebase (thus making it compilable as either), the compile time dropped from an hour and a half in C++ to 15 minutes in C. Since I absolutely must have scripting, since the VM lang optionally allows GC transparently across C or script (by replacing malloc and friends), and since it has more flexible OOP -- entities can change "state" and thus remap method behaviors (function pointers), a large improvement over jump tables (switch statements) for my typically highly stateful code -- I avoid C++ like the plague.
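    For readers unfamiliar with that idiom, a minimal sketch of state-remapped behavior via function pointers, with invented names (Entity, patrol, attack): changing state swaps the pointer, instead of re-running a switch on a state enum at every call.

        #include <cstdio>

        struct Entity;
        typedef void (*UpdateFn)(Entity*);

        struct Entity {
            UpdateFn update;  // current behavior; reassigned on state change
        };

        void attack(Entity*) { std::puts("attacking"); }

        void patrol(Entity* e) {
            std::puts("patrolling");
            e->update = attack;  // state transition: remap the method
        }

        int main() {
            Entity e{patrol};
            e.update(&e);  // patrolling (and transitions to attack)
            e.update(&e);  // attacking
            return 0;
        }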

    In fact, since the scripting language can translate itself into C, I don't touch C much either for my projects unless I'm extending the language itself. Over the years I've ported the "compiled" output to support Java, Perl, and JavaScript (and am working on targeting ASM.js). It's grueling work just for my games and OS hobby projects, but I really can't bring myself to use a compilable language that doesn't double as its own scripting language -- that's asinine, especially if it compiles an order of magnitude slower.

    Don't get me wrong, I get the utility of a general purpose OOP language built around the most general purpose use cases possible; However when you design something for everyone, you've actually designed it for no one at all. I'll take a language with full language features applicable to its templating (code generation), like my scripting language (or Lisp) over C++ or Java any day. (Note: Rhino or Lua JIT + Java is a close contender as far as usability goes, but nothing beats native compiled code for my applications' use case.)

    WRT the "insightful" commenter above: deadlines matter far more to code being able to run at all than the minute performance gains distributed across end-user systems, which Moore's law erodes anyway. Release date is far more significant: the code has 0% usability if I can't produce it in time. Unfortunately, some projects depend on emergent behaviors and thus require fast revisions to tweak (this goes doubly for me, hence the scripting requirement).
