Speed Test: Comparing Intel C++, GNU C++, and LLVM Clang Compilers
Nerval's Lobster writes "Benchmarking is a tricky business: a valid benchmark tries to remove all extraneous variables in order to get an accurate measurement, a process that's often problematic. Sometimes it's nearly impossible to remove all outside influences, and often the process of taking the measurement can skew the results. In deciding to compare three compilers (the Intel C++ compiler, the GNU C++ compiler (g++), and the LLVM clang compiler), developer and editor Jeff Cogswell takes a number of 'real world' factors into account, such as how each compiler deals with templates, and comes to certain conclusions. 'It's interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time,' he writes. 'But I wasn't able to test much regarding the parallel processing with clang, since its Cilk Plus extensions aren't quite ready, and the Threading Building Blocks team hasn't ported it yet.' Follow his work and see if you agree, and suggest where he can go from here."
Funny benchmarks (Score:2, Insightful)
The benchmarks in TFA are a little funny. Why is system time so large while user time so small? The only time I've seen this in real applications is when there is major core contention for resources.
Re:Funny benchmarks (Score:4, Interesting)
This looks like it's testing compile time, in which case a large % of the time being system time isn't that uncommon. Lots of opening and closing small files, generating temp files, general banging on the filesystem. It can heavily depend on your storage speed: compiling the Linux kernel on an SSD is much faster than on spinning rust.
Re:Funny benchmarks (Score:5, Interesting)
That's normal: hyperthreading on that machine screws up that kind of measurement. You should always use wall-clock time when dealing with parallel code. You should also repeat the test multiple times and discard the first few results, which the author did not do; that is standard practice in parallel programming benchmarks. Since the author did not do that, I assume he does not know much about benchmarking. Lots of parallel middleware has high initialization overhead, and this tends to be particularly true for Intel tools.
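Something like this minimal harness is what I would expect (just a sketch, with the workload left as a placeholder): wall-clock timing via std::chrono::steady_clock, several repetitions, and warm-up runs discarded.

#include <chrono>
#include <cstdio>

static void workload() {
    /* the code under test goes here */
}

int main() {
    const int warmup = 2, reps = 10;
    // Warm-up runs: thread pools, caches, and pages get initialized here,
    // so their one-time cost doesn't pollute the measurements.
    for (int i = 0; i < warmup; ++i)
        workload();
    double best = 1e300;
    for (int i = 0; i < reps; ++i) {
        auto t0 = std::chrono::steady_clock::now();  // wall clock, not CPU time
        workload();
        auto t1 = std::chrono::steady_clock::now();
        double s = std::chrono::duration<double>(t1 - t0).count();
        if (s < best) best = s;
        std::printf("run %d: %.4f s\n", i, s);
    }
    std::printf("best: %.4f s\n", best);
    return 0;
}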
Re: (Score:2)
Re:Funny benchmarks (Score:5, Insightful)
I’m testing these with an array size of two billion.
That's all I needed to read to ignore him completely. Completely and utterly pointless. If g++ won, it is likely because it utilised stream intrinsics to avoid writing data through the CPU cache, which would have freed up more cache and minimised the number of page faults. This will not in any way test the performance of the CPU code; it will just prove that your 1333MHz memory is slower than your 3GHz processor. This is why you don't profile code (wrapped up in a stupid for loop), but profile whole applications instead. From my own tests (measuring the performance of large scale applications using real world data sets), intel > clang > g++ (although the difference between them is shrinking). The author of the article hasn't got a clue what he's doing. FTA:
Notice the system time is higher than the elapsed time. That’s because we’re dealing with multiple cores.
No it isn't. It's because your CPU is sitting idle whilst it waits for something to do.
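(For anyone wondering what I mean by stream intrinsics above: a rough sketch using SSE non-temporal stores, which write around the cache so a big output array doesn't evict everything else. The function name and sizes are made up for illustration, and dst must be 16-byte aligned.)

#include <immintrin.h>
#include <cstddef>

// Fill a large, 16-byte-aligned array without polluting the cache.
void fill_streaming(float* dst, std::size_t n, float value) {
    __m128 v = _mm_set1_ps(value);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_stream_ps(dst + i, v);   // non-temporal store: bypasses the cache
    _mm_sfence();                    // order the streaming stores
    for (; i < n; ++i)
        dst[i] = value;              // scalar tail
}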
Re:Funny benchmarks (Score:5, Insightful)
Here's another tipoff that the guy is clueless about benchmarking, talking about a test which does FP math:
I’m not initializing the arrays, and that’s okay
Actually, it's not. This is a bad mistake which totally invalidates the data. Many FPUs have variable execution time depending on input data. There is often a large penalty for computations involving denormalized numbers. If uninitialized data arrays happen to be different across different compilers (and they might well be), execution time can vary quite a lot for reasons completely unrelated to compiled code quality.
It's not limited to FP, either. I remember at least one PowerPC CPU which had variable execution time for integer multiplies -- the multiplier could detect "early out" conditions when one of the operands was a small number, allowing it to shave a cycle or two off the execution time.
The moral of the story: making sure that input data for benchmarks is always the same is very important, even when it's trivially obvious that the code will execute the exact same instruction count for any data set.
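Here's a quick sketch of the kind of experiment that shows the denormal effect (the constants are arbitrary; on x86 the penalty disappears if flush-to-zero/denormals-are-zero modes are enabled, which -ffast-math typically does):

#include <chrono>
#include <cstdio>

// Time the same multiply loop on a normal vs. a denormal input. The
// volatile qualifiers stop the compiler from folding the loop away.
static double time_mults(float input) {
    volatile float x = input;
    volatile float sink = 0.0f;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 50000000; ++i)
        sink = x * 1.000001f;   // denormal operands cost extra cycles on many FPUs
    auto t1 = std::chrono::steady_clock::now();
    (void)sink;
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    std::printf("normal input   (1.0f):   %.3f s\n", time_mults(1.0f));
    std::printf("denormal input (1e-42f): %.3f s\n", time_mults(1e-42f));
    return 0;
}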
Re: (Score:2)
Not to mention the fact that on Linux a memory page is allocated (and so placed in memory) the first time it is touched. So if you do not initialize the array, the allocation and placement happen during the traversal, which is probably not something you want to time.
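A sketch of what that looks like in practice (the size is arbitrary and the behavior is Linux-specific): the first pass pays for the page faults and kernel zero-filling, the second pass hits pages that are already resident.

#include <chrono>
#include <cstdio>
#include <cstdlib>

int main() {
    const std::size_t n = 256u * 1024 * 1024;   // 1 GiB of floats
    float* a = static_cast<float*>(std::malloc(n * sizeof(float)));
    for (int pass = 0; pass < 2; ++pass) {
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i)
            a[i] = 1.0f;
        auto t1 = std::chrono::steady_clock::now();
        // Pass 0 includes the page faults; pass 1 touches resident pages.
        std::printf("pass %d: %.3f s\n", pass,
                    std::chrono::duration<double>(t1 - t0).count());
    }
    std::free(a);
    return 0;
}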
Re: (Score:2)
From my own tests (measuring the performance of large scale applications using real world data sets), intel > clang > g++ (although the difference between them is shrinking).
I have run lots of experiments in this area as well, and my overall conclusion was that the Intel compiler, the PGI compiler, and GCC are all good compilers, but their performance varies significantly depending on what you are compiling. For other applications you would get different results, or within one application some set of functions is compiled more efficiently by the Intel compiler while others are compiled better by GCC. It is really difficult to say which is "the best" compiler in terms of performance.
Re: (Score:3)
Re: (Score:2)
Compile time is irrelevant. (Score:4, Insightful)
Re: (Score:3, Insightful)
Which one produced the fastest code?
My current project takes two hours to compile from scratch, and uses around 20% CPU when it runs. So yes, compile time can be more important than how fast the code runs.
Re:Compile time is irrelevant. (Score:5, Insightful)
But over the lifetime of any average program, runtime should outweigh compile time by orders of magnitude.
Otherwise, honestly, why bother to write the program efficiently at all?
And if you want to decrease compile times, it's easy - throw more machines and more power at the job. If you want to decrease runtime, then ALL of your users have to do that.
Honestly, if your compile times are that long, and that much of a burden, you need to upgrade, and you also need to modularise your code more. The fact is that most of that compile time isn't actually needed for 90% of compiles unless your code is very crap.
Re:Compile time is irrelevant. (Score:5, Informative)
I have worked on projects that have taken upwards of 8 hours for a full compile, so there is a lot of validity to weighing the business impact of different compilers.
The current mentality of throwing more horsepower at a problem is not always the practical or logical answer. If you can improve your overall compile time, you can improve your productive time.
From a Build Engineering perspective, analyzing why it takes time for a project to compile is one of the most important metrics.
Not only do I monitor how long a project takes to compile, but I also keep an active average, and try to maintain highs and lows to identify compile spikes.
We monitor processor(s), disk access speeds, memory loads, build warnings, change size, concurrent builds, etc.
We look at all possible solutions. With the current build tools we have, we can either provision another build system for the queue or, if necessary, add memory, disk space, faster drives, or more processors, or even upgrade software. We have gone as far as home-grown fixes to get around issues until better solutions become available.
All of this needs to be accounted for. So not only is compile time relevant, but what is CAUSING long compile times is relevant.
Re: (Score:2)
Re: (Score:2)
Lipstick on a pig.
Re: (Score:3, Funny)
Re:Four-hour compile times mean a 1-day turnaround (Score:5, Insightful)
Re: (Score:3)
Re: (Score:2)
Although you have a good point, his point is still valid.
Re: (Score:2)
Who's to say you have to recompile everything? Surely if you're making a small bugfix you just recompile the files which have changed...
Plus you have tools like distcc, ccache etc
Re: (Score:2)
Honestly, if your compile times are that long, and that much of a burden, you need to upgrade, and you also need to modularise your code more. The fact is that most of that compile time isn't actually needed for 90% of compiles unless your code is very crap.
Hint: I said 'two hours to compile from scratch'. You can't avoid compiling all your source if you just did a clean checkout from SVN into an empty source tree; as you would, for example, before building a release or release candidate.
Re: (Score:2)
Also, as a developer, faster compile times make my life a little less frustrating so I'll be less likely to pull out all my hair while waiting on the computer.
Re: (Score:3)
$ time make clean optim_ifort
make clean optim_ifort 50.16s user 0.86s system 98% cpu 51.731 total
in serial, and it's roughly 3x faster for the parallel cmake build. This is with -O3 and inter-procedural optimizations turned on, generating AVX-tuned code. If I have only edited a file or two, the compile is of course much quicker.
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
An excessively long build time can inflate development costs if the delay in testing new code becomes prohibitively long. A large codebase that takes 4 hours to build on a slow compiler will force developers to frequently wait overnight for test results to come back. If a different compiler can build that code 4x faster you have many more opportunities to observe test results during a work day. Upgrading the build system isn't always an option when you have to support legacy platforms with inherently slow
Re: (Score:2)
How often is the compiler the bottleneck? It seems more likely that optimizing the build process and hardware (particularly by adding more RAM and an SSD) would have a much, much greater effect than switching compiler and at no expense to your run-time performance.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
When you stop using DOS it will become quite relevant actually.
Re: (Score:2)
Re: (Score:3)
The time spent on running code vs compiling code, for me, is like 10000:1, to be optimistic. Compilation time is pretty irrelevant for me and I daresay most users.
Re: (Score:2)
No, in most cases execution time will vary much less than compile time.
Re: (Score:2)
As a Gentoo user I don't particularly care about compile times either; I have both ccache and distcc set up to speed up compilation, and I build binary packages where I have a large number of identical machines. I can always let compiles run overnight and have them ready by the morning, and even that isn't usually necessary with modern hardware.
On the other hand, having the resulting binaries run faster and use less memory either because of better compiler options, or more efficient compile time flags (eg disa
Re:Compile time is irrelevant. (Score:4, Interesting)
While any user-facing application is going to spend most of its time waiting for the user to do something, the latency to finish that task is still something the user will want to see optimized. Further, if a long-running task tops out at 20% CPU, apparently optimization was weighted too much towards CPU and you need to look into optimizing your IO or memory usage.
Re: (Score:2)
His 20% is 1 of 6 cores (16.666%) plus a little bit.
Doesn't seem like anyone here even has a cursory understanding...
Re: (Score:3)
You're IO bound. Get a real disk subsystem.
Re: (Score:2)
Re:Compile time is irrelevant. (Score:5, Interesting)
Which one produced the fastest code?
My current project takes two hours to compile from scratch, and uses around 20% CPU when it runs. So yes, compile time can be more important than how fast the code runs.
I had a C++ project like that once... It was a tightly coupled scripting language that could be compiled down to the host language if parts needed to be sped up. I noticed that I was mostly avoiding C++ features since they didn't exist (e.g. multiple inheritance with non-pure virtual base classes -- which the scripting language allowed by letting variables be virtual) and implementing them in C instead. So I ditched C++ and coded to C99 instead. When I got all the C++ out of the codebase (thus making it compilable as either), the compile time dropped from an hour and a half in C++ to 15 minutes in C. I absolutely must have scripting; the VM language optionally allows GC transparently across C or script (by replacing malloc and friends); and it has more flexible OOP: entities can change "state" and thus remap method behaviors (function pointers), a large improvement over jump tables (switch statements) for my typically highly stateful code. So I avoid C++ like the plague.
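(To make the "remap method behaviors" bit concrete, here's a minimal sketch -- not my actual scripting language, just illustrative C-style code that compiles as C99 or C++ -- where a state change swaps a function pointer, so the hot path never switches over a state enum.)

#include <stdio.h>

struct Entity;
typedef void (*UpdateFn)(struct Entity*);

struct Entity {
    UpdateFn update;   /* current behavior; reassigned on state change */
};

static void update_attacking(struct Entity* e);

static void update_idle(struct Entity* e) {
    printf("idle\n");
    e->update = update_attacking;   /* state change remaps the method */
}

static void update_attacking(struct Entity* e) {
    printf("attacking\n");
    e->update = update_idle;
}

int main(void) {
    struct Entity e = { update_idle };
    for (int i = 0; i < 4; ++i)
        e.update(&e);   /* dispatch through the pointer, no switch needed */
    return 0;
}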
In fact, since the scripting language can translate itself into C, I don't touch C much either for my projects unless I'm extending the language itself. Over the years I've ported the "compiled" output to support Java, Perl, and JavaScript (and am working on targeting ASM.js). It's grueling work just for my games and OS hobby projects, but I really can't bring myself to use a compilable language that doesn't double as its own scripting language -- that's asinine, especially if it compiles an order of magnitude slower.
Don't get me wrong, I get the utility of a general purpose OOP language built around the most general purpose use cases possible; However when you design something for everyone, you've actually designed it for no one at all. I'll take a language with full language features applicable to its templating (code generation), like my scripting language (or Lisp) over C++ or Java any day. (Note: Rhino or Lua JIT + Java is a close contender as far as usability goes, but nothing beats native compiled code for my applications' use case.)
WRT the "insightful" commenter above: deadlines are far more important than the distributed minute performance gains on end-user systems, which Moore's law erodes anyway. Release date is far more significant: the code has 0% usability if I can't produce it in time. Unfortunately, some projects depend on emergent behaviors and thus require fast revisions to tweak (this goes doubly for me, hence the scripting component requirement).
Re: (Score:2)
Ah, another satisfied Boost user, I see.
Re: (Score:2)
Besides if it's spending 80% of the time idle, then the program is waiting for the user not the other way around.
Bingo. When the software is waiting for something to do 80% of the time, and nothing else of any importance is running on that machine, optimization is pretty much irrelevant; at best it would save a tiny amount of power by slightly reducing CPU usage.
Re: (Score:2)
And how often will nothing else be running on the machine?
These days a large proportion of servers run under hypervisors, and that 80% of idle time will be used by other virtual machines running on the same physical hardware. If you make your code more efficient, then you can consolidate more functions onto the same hardware which could result in significant cost savings.
And even on single standalone machines, modern powersaving functions will mean that far less power is used during the 80% idle periods, an
Re: (Score:2)
Besides if it's spending 80% of the time idle, then the program is waiting for the user not the other way around.
Bingo. When the software is waiting for something to do 80% of the time, and nothing else of any importance is running on that machine, optimization is pretty much irrelevant; at best it would save a tiny amount of power by slightly reducing CPU usage.
Yeah, and my work laptop backup software is idle 80% of the time...except for those 4 hours every Friday when the disk utilization pegs to 100% and it starts taking several minutes just to switch to a different folder in Outlook, with no other programs open...and if I need to use SQL Developer or Access or something, it's gonna have to wait for Monday!
Nobody cares how much time your program sits there waiting for someone to push the button. They care about how quickly it reacts once you push that button. Ju
Re:Compile time is irrelevant. (Score:5, Insightful)
Which one produced the fastest code?
It doesn't matter. It may matter which one compiles your code faster. Depending on your use of things like templates, classes, etc., that may be a different compiler than the one that tops the benchmarks. But even that is unlikely to matter much; I doubt there is much more than a few percent difference. More important are issues like standards compliance, good warning messages, toolchain/IDE integration, etc.
DIE Business Intelligence DIE (Score:5, Interesting)
What on earth does compiler benchmarking have to do with the BI section of slashdot?
Furthermore, why on earth are you idiots creating a blurb on the main screen that just links to a different slashdot article? It's such terrible self promotion. Just freaking write the main article as the main article. No need to make it seem as if the Business Intelligence section is actually worth reading; it's not.
Re:DIE Business Intelligence DIE (Score:4, Insightful)
Oxymoronic.
Measuring pebbles (Score:5, Insightful)
Interesting info, but I have a couple of issues:
First off, why wasn't Microsoft's C++ compiler included in this? That's the one we use at work, so that's the one I'd really like compared to all those others. Are we the only ones still using it or something?
More importantly, why on earth was compilation speed the only thing compared? I mean, I suppose it's nice for g++ users to know that their 10 minute compiles would have been 2 minutes longer if they used the Intel compiler, but Intel users might not really care if they believe their resulting code is going to run faster. Speed of compilation of unoptimized code is a particularly useless metric, because different compilers have different definitions of "unoptimized", so it's guaranteed you aren't comparing apples to apples.
I suppose compilation speed is a nice metric to brag about between compiler writers. But for compiler users, the most important things are roughly these, in order: toolchain support, language feature support (e.g. C++11/14 features), clarity of error/warning messages, speed of generated code (optimization), and lastly speed of compilation. I'm not really sure why you took it upon yourself to measure the least important factor, and only that one.
Re: (Score:2, Informative)
Your pre-elementary reading and comprehension skills leave much to be desired.
It’s interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time.
Just in case you didn't get that: They did benchmark the resulting binaries, and g++ made the best ones.
Re:Measuring pebbles (Score:5, Interesting)
OK. Much abashed, I went back through the article.
It turns out that there are numbers for an actual code benchmark. It's found about 2/3rds of the way through the report, in the third graph (untitled), after the balance of the text had already been devoted to compilation speed comparisons. Also, it only listed 2 of the 3 compilers, half the data was for unoptimized code (and thus useless), and it was hidden behind a sign that said "Beware of leopard".
OK, perhaps I made that last part up.
For the curious, the difference in at least that one table was never more than 5%. In my mind, hardly a differentiator unless you are doing heavy number crunching or stock trading programs.
Perhaps the remaining 1/3rd is all about more important things? I've lost interest. You're right. I'm weak.
Re: (Score:2)
Re: (Score:2)
Maybe because most people want Microsoft out of the picture and conveniently choose to ignore it rather than lend credence to it?
Re:Measuring pebbles (Score:4, Interesting)
This could be the difference between an hour and 10 minutes for builds of some projects.
If that's really the delta (one takes six times as long), then something is likely seriously pathologically wrong with one of those two compilers. Submit a bug report (not that it helps you, but it will help someone else).
But yes, different users in different phases will have different priorities. I'm not laying down an immutable law here, just trying to restore the proper proportion to a situation that we both agree is way out of whack.
Re: (Score:2)
In GCC's case, there are
Re: (Score:2)
http://en.wikipedia.org/wiki/Windows_3.1x#DR-DOS_compatibility [wikipedia.org]
See also post below: "Internet Explorer is a crucial part of the OS! It just happens to be really convenient for smothering our competitors, too."
Oh, and while we're at it: http://redmondmag.com/articles/2013/08/22/windows-8-security-issues.aspx [redmondmag.com]
So there are arguments for this, I'm sure, but it rankles me. A computer should be first and foremost under the control of its owner--in the case of PCs, the end user.
QED.
Crappy benchmark (Score:5, Informative)
The code in the benchmark runs a parallel for over a 10 billion element array but in steps of 100 elements.
It's going to be limited by the creation and destruction of threads.
Also, by not initializing the input array, the floating point arithmetic is vulnerable to eventual denormal values.
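To illustrate the step-size point: here's roughly what a sane grain size looks like with TBB's blocked_range (array and grain sizes here are made up). The grain size bounds how finely the scheduler may chop the range, so the per-chunk scheduling overhead gets amortized.

#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>

int main() {
    std::vector<float> a(100 * 1000 * 1000, 1.0f);
    // With a tiny grain (e.g. 100) scheduling overhead dominates; a
    // coarser grain keeps each task big enough to be worth stealing.
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, a.size(), /*grainsize=*/100000),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                a[i] = a[i] * 2.0f + 1.0f;
        });
    return 0;
}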
Re: (Score:2, Interesting)
By not initializing the input array, the code's behaviour is undefined. Which makes this a test of what these compilers do with complete garbage source that isn't even a valid C++ program.
This benchmark is pointless (Score:5, Informative)
I am a scholar and study parallel computing. These benchmarks are pretty much pointless; you cannot draw any conclusions from these results. The author takes the whole execution time, from the creation of the process to its destruction. That includes a lot of overhead which would be startup time in a real application.
There is also apparently no thread pinning to computational cores. This is known to make a HUGE difference.
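(For reference, here is roughly what pinning looks like on Linux, using the GNU-specific pthread_setaffinity_np; error handling omitted, and whether it helps depends on the workload.)

#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to one core so it stops migrating between
   cores (and, on multi-socket machines, away from its own memory). */
static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}

int main(void) {
    pin_to_core(0);   /* each worker thread would pin itself similarly */
    printf("pinned to core 0\n");
    return 0;
}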
Then the author compares Cilk results. Cilk is known to be slow for simple codes that do not require work stealing and do not have complex dependencies. For the record, I know they are also comparing TBB, but TBB is implemented on top of the Cilk engine in the Intel compiler (I don't know about gcc).
In these results hyperthreading is enabled. The proper use of hyperthreading is complicated: there are some problems where it helps, others where it hurts, and I would not be surprised if this behavior were compiler dependent.
Finally, it is almost impossible to compare compilers. On different platforms, with the same compilers you will get different results. Some functions are better compiled by one compiler and some functions are better compiled by the other compiler. This has been reported over and over and over again.
If you care about performance, you should not rely on what your compiler is doing in your back. You need to know what it is doing. Depending on memory alignment (and what the compiler knows about it), depending how the vectorization happen, depending on potential memory aliasing you will get different results.
If you care about performance, you need to benchmark and you need to optimize and you need to know what the compiler does.
Re: (Score:2, Insightful)
I am a scholar and study parallel computing.
aka I'm a second year computer science student.
No, give the guy a break... English is not his first language. You can tell from the "what your compiler is doing in your back", instead of "behind your back", that sort of thing. From timezone, European seems most likely... from the sentence structure... French?
Re: (Score:2)
I am a scholar and study parallel computing.
aka I'm a second year computer science student.
No, give the guy a break... English is not his first language. You can tell from the "what your compiler is doing in your back", instead of "behind your back", that sort of thing. From timezone, European seems most likely... from the sentence structure... French?
Indeed, godrik's English is quite fluent. It's just those few subtle points that give away that they're probably not a native speaker.
And well picked, AC: godrik has mentioned living in France in the past, so may very well be a French speaker.
Of course, we could just ask... but where's the fun in that?
Re: (Score:2)
You are indeed correct, I am French. And for the first AC that replied to me: I was a 2nd-year computer science student something like 10 years ago.
I am currently a CS professor at a US university and I have been doing low-level performance studies on various architectures (Intel Xeon, NVIDIA GPUs, recently Xeon Phi, distributed-memory machines) for the last 4 years. So I might not know everything about performance benchmarks, but clearly the methodology of the original article is flawed. I would whip (figuratively of course
Re: (Score:2)
TBB is *not* implemented using the Cilk Plus runtime, either in the Intel compiler or in the Cilk Plus branch of GCC. TBB is implemented using a completely separate runtime from Cilk Plus. You can take my word for it that I know what I'm talking about, or you can confirm it by studying the sources online, since they are both publicly available. :)
Interesting. I never looked at how it is implemented by ICC. But my understanding was that (some parts of) TBB used a work-stealing engine for execution and reused a significant portion of the Cilk runtime. I may have misunderstood.
Pinning threads to cores can help on some benchmarks, but it is less useful for others. In particular, for codes implemented in TBB or Cilk Plus, which use work-stealing schedulers, the performance benefits of pinning can be modest, almost negligible, or sometimes even hurt performance.
Well, the point of pinning is to increase memory locality. Work-stealing engines typically try to keep things local to avoid that problem, so you would get little migration anyway. Now that I think about it more, Cilk Plus tends to create more threads than cores
Intel C++ produced fastest code for us (Score:5, Insightful)
for compilers (Score:3)
Cilk Extensions and Clang (Score:2)
In our times (Score:2)
Re:first post (Score:5, Funny)
compiled with clang
Re: (Score:2, Funny)
man, it took a long time to read it.
GCC post (Score:2)
I could have beaten him with my highly optimized GCC-devel compile of "FirstPost.cxx",
but I didn't quite understand the error message regarding the templates.
Re: (Score:3)
Re:first post (Score:4, Funny)
first ++pre
Re:first post (Score:5, Funny)
#include <stdio.h>

void FirstPost(int a, int b)
{
    if (a < b)
        printf("I got first post!");
    else
        printf("No, I got first post!");
}

int main(int argc, const char** argv)
{
    int i = 0;
    // What prints out here?
    FirstPost(i++, i++);
    return 0;
}
Re: (Score:3)
Assuming typical C calling convention.... "No, I got first post" will be printed, where a will be 1 and b will be 0 in the call to FirstPost. This is because generally, final arguments are evaluated and pushed onto the stack before earlier ones.
Although the standard says this behavior is undefined, in practice almost all modern C compilers will produce the output I've described here.
Re: (Score:3)
Clang will just issue a warning that you are making multiple unsequenced modifications. This is undefined in the C spec, and the compiler just increments i sequentially, printing "I got first post!". Sequence points like this are hard to pin down for all cases, which is why the C99 spec leaves the behavior undefined. In C11 a detailed memory model has been created which should define most cases. http://en.wikipedia.org/wiki/C11_(C_standard_revision) [wikipedia.org]
Confirmed with:
Configured with: --prefix=/Applications/Xcode.app/Conten
Re: (Score:2)
Pretty sure there is no intention whatsoever of turning that into defined behavior.
Re: (Score:2)
I am not at all convinced about this "almost all modern C compilers", given how many will do fairly awesome things once they determine that the behavior is undefined.
Re:first post (Score:5, Informative)
If it were just up to the order of evaluation of the function arguments, then it would be unspecified. However, the program also modifies the same object twice without an intervening sequence point, and that puts it into undefined behavior territory (6.5/2, C99 draft standard [open-std.org]).
Re: (Score:2)
No, the ++ operation will take place before the next sequence point (super important concept! If you do not fully grok sequence points, you are not really programming C). The end of a statement is one sequence point, a function call is another sequence point.
Here you have two modifications to i before that, and that is what is invoking undefined behaviour (in the same way i = array[i++]; is also undefined behaviour since i is modified twice before the end of the statement).
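(A tiny illustration of the distinction, assuming C99 sequence-point rules; the undefined line is left commented out so the program itself is well-defined:)

#include <stdio.h>

int main(void) {
    int a[2] = { 10, 20 };
    int i = 0;

    i = i + 1;        /* fine: i modified once, sequenced at the ';'        */
    i = a[i] + 1;     /* fine: i is only read on the right-hand side        */
    /* i = a[i++]; */ /* undefined: i would be modified twice before ';'    */

    printf("i = %d\n", i);
    return 0;
}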
Re: (Score:2)
Actually, you don't strictly need sequence points to determine the order of events, sometimes they can be guaranteed simply by the defined order of operations, even if they result in side effects.
Consider the statement x = a[i++] + b[i++]. This should be equivalent to temp = a[i++], temp = temp + b[i++], x = temp, because the order in which the + operator evaluates its operands is determined by the standard, even without sequence points.
But the initial example, where one passes in an argume
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Don't use templates. Period.
Re: (Score:2)
Re: (Score:2)
I'm skeptical about that. Wasn't Intel's compiler supposed to produce the fastest binaries - on Intel machines, at least?
Re: (Score:2)
So, if it fails to be best in one case, it is therefore suboptimal in all other cases? Guess we should un-launch all the satellites since a few of them were damaged on the ground, and tell the Mars rover to power down, since other Mars missions have had problems.
Re: (Score:3)
The main claim for g++ for a very long time was "while it does not optimize much or support all of the language, it is FREE".
I have never heard that claim, maybe because it isn't true. g++ has always been one of the best at language support. It has not always been the best at low-level processor specific optimizations, but it has made up for that by being really good at higher level optimizations, like recognizing unused code, inlining, and code hoisting. I haven't seen a better compiler at any price.
Re:future compiler trends (Score:5, Informative)
it has made up for that by being really good at higher level optimizations
Heh, heh, heh, don't remember the great EGCS split of '97, do you sonny? Yep, us old timers knew that gcc was a dog of an optimizer, but them EGCS whippersnappers fixed it, and even got the fork accepted as the official gcc. Remember, you probably got to where you are today by running over the body of some crusty old-timer.
Re:future compiler trends (Score:4, Informative)
The main problem back then was x86 optimizations, not the high-level optimizations, although it was lacking in those too. Eventually they started porting the code to use GIMPLE and moved most of the optimizations away from the language-dependent trees into the language-independent GIMPLE code. This was done before LLVM was even popular.
Re:future compiler trends (Score:5, Informative)
No doubt that gcc is a damned good compiler these days, at least in terms of the quality of code produced, if not the speed at which the compiler runs. My point was just that it wasn't always so. Back then gcc was considered a toy compared to some of the commercial compilers, and it was. Thankfully the EGCS people did a lot to change that, and got the ball rolling for future improvements.
Re: (Score:2)
Most C and C++ compilers were completely atrocious on x86 even as little as 15 years ago. "Optimization" meant a little bit of non-extensive peephole work.
Re: (Score:2)
More than likely your main gcc use was for Mac or iOS applications and you changed compiler because you can't even figure out how to change the defaults in Xcode. Having tried both with my applications, I can tell quite clearly that clang is not up to snuff. Sure, it compiles quickly and the syntax errors have color highlighting, but the quality of the code it produces, in terms of execution speed or size, is vastly inferior.
Re: (Score:2)
You are an idiot.
NO U
No, _I_ am Idiotus!
Re: (Score:2)
I thought that was something people used back when MS-DOS was a popular OS; I was not even aware the product still existed.
Re: (Score:2)
I am talking about Watcom C++ of course.
Re: (Score:3)
I thought that was something people used back when MS-DOS was a popular OS; I was not even aware the product still existed.
I am talking about Watcom C++ of course.
It was open sourced [openwatcom.org] some time ago. Now it supports Linux (to some extent) and some other CPU architectures.
It can still make DOS/4GW exes, though. Ahh, nostalgia.
Re: (Score:2)
Yeah, besides missing compiler flags, how does it perform on different Intel processors, and how about different AMDs?
Plus, the huge system times seem to indicate this is more a kernel test than a compiler one.
Sorry, AC, I will have to let go of my positive mod point to you so I can reinforce what you've said. Next time, please consider making an account so you can escape the Score: 0 limbo when you post on Slashdot :(
Since Intel has been caught red-handed crippling AMD processors [wikipedia.org] with code produced by the Intel C++ Compiler, I think that testing on both Intel and AMD processors should be the duty of every single compiler benchmark -- at least every one posted on Slashdot.