IBM Releases Open Source Machine Learning Compiler 146
sheepweevil writes "IBM just released Milepost GCC, 'the world's first open source machine learning compiler.' The compiler analyses the software and determines which code optimizations will be most effective during compilation using machine learning techniques. Experiments carried out with the compiler achieved an average 18% performance improvement. The compiler is expected to significantly reduce time-to-market of new software, because lengthy manual optimization can now be carried out by the compiler. A new code tuning website has been launched to coincide with the compiler release. The website features collaborative performance tuning and sharing of interesting optimization cases."
Few Questions for any programmers (Score:2, Interesting)
Re:Oh really? (Score:5, Interesting)
The last one is actually quite possible, and indeed is a huge area of compiler research.
Re:Few Questions for any programmers (Score:3, Interesting)
vs
Re:Few Questions for any programmers (Score:3, Interesting)
Another very similar one, and one that comes up more commonly, is the replacement of a multiplication or division by a constant by a series of additions, subtractions, and bitshifts.
ARGH! Mod parent down! Please, please, please don't ever repeat this again to people asking things about optimisation. On most modern computers, shifts are slow. They are often even microcoded as multiplications, because they are incredibly rare in code outside segments where someone has decided to 'optimise'. Even when they're not, a typical CPU has more multiply units than shift units and the extra operations needed from the shift and add sequence bloat i-cache usage and cause pipeline stalls by adding adjacent dependencies. The 'optimised' version you describe will almost certainly be slower than the version using the multiply instruction.
I did some benchmarks with a Core 2 Duo a few months back of this exact optimisation and discovered that in the simplest case the add-and-shift version was as slow as the multiply, in any more complex case it was slower. There's a reason why GCC hasn't done this for some years.
Re:Oh really? (Score:4, Interesting)
Confusing summaries like that are so frequent that sometimes I actually go and RTFA!
Seriously, the summaries should be subject to moderation too (I don't know if the firehose thing lets you do that).
Re:Automation... (Score:2, Interesting)
See Idiocracy. Go out and watch it. I'll wait.
The main tenets in Idiocracy were that IQ is hereditary and those with less IQ spend more time procreating. Automation was merely allowing their society to function, barely. IOW, I don't see your point. Can you elaborate, please?
Re:Oh really? (Score:3, Interesting)
While the summary is wrong on this subject, I can tell you that, yes, manual optimization is part of our work and can slow down the release of our product. If we tell a customer that yes, we will be able to do VGA 30FPS H.264 encode, then code optimization on our custom core is going to take some time and effort. I work in the embedded multimedia field.
I think we're going to be very, very interested in this project.