IBM Releases Open Source Machine Learning Compiler
sheepweevil writes "IBM just released Milepost GCC, 'the world's first open source machine learning compiler.' The compiler uses machine learning techniques to analyse the software and determine which code optimizations will be most effective during compilation. Experiments carried out with the compiler achieved an average 18% performance improvement. The compiler is expected to significantly reduce time-to-market of new software, because lengthy manual optimization can now be carried out by the compiler. A new code tuning website has been launched to coincide with the compiler release. The website features collaborative performance tuning and sharing of interesting optimization cases."
Automation... (Score:2, Insightful)
... can create stupid humans. Let's embrace technology but beware of falling into ignorance.
Re: (Score:3, Informative)
I fail to see how automation leads to lower IQ scores. Care to elaborate? How does stepping up the pace, and eliminating tedious, mundane jobs, lead to a lesser society? I call FUD.
wooooosh.
See idiocracy. Go out and watch it. I'll wait
Saw it? Good, now you should get the joke.
Re: (Score:3, Informative)
As for the GP, I believe he is mixing two things incorrectly:
fail to see how automation leads to lower IQ scores
and
lead to a lesser society
Lower IQ scores don't immediately mean a lesser society, but if you take the thinking out of a process and let a process/machine/program do all the thinking, your mind will inevitably get lazy and your work will suffer over time.
Re:Automation... (Score:4, Insightful)
if you take the thinking out of a process and let a process/machine/program do all the thinking, your mind will inevitably get lazy and your work will suffer over time
I think that it could very well free your mind to think about better things. Build systems are a good example. If I had to manually compile each translation unit, I couldn't spend as much time thinking about the code.
Re: (Score:2, Insightful)
Re: (Score:2)
As for the supposedly huge performance improvement of 18% (that's all?!), I have regularly hand-optimised code that ran more than twice as fast.
True. I've read that a hand optimization of less than 50% sometimes isn't worth a developer's time, because users won't really notice it. (Obviously that doesn't apply to situations where a ton of small optimizations are needed, and the application will speed up over time.)
Re: (Score:3, Insightful)
Abstraction is one of the foundations of higher thinking. There is something to be said for being able to do lower-level tasks, but you don't concern yourself with the internals of them when you want to treat them as discrete objects. Nobody thinks about the construction of an AND gate when they're designing something that uses AND gates. Nobody thinks about the internal workings of a method or function when they simply want to call it. In every area, the process is the same: You first learn the basic comp
Re: (Score:2)
I said "simply want to call a function". If you're debugging, then you look at it. If you're designing code, you just use the function. The very existence of a function instead of endlessly repeated code is an example of the principle of abstraction.
Re: (Score:2)
You probably aren't as good at mental arithmetic as someone who's had to do math without calculators though. The question is whether that is a problem. I think the non sequitur here is the subtle implication that not possessing certain skillsets, such as mental arithmetic, would lead to humans becoming lazy and eventually the downfall of society.
I'd say that by examining the average person's mastery of stabbing a sharp stick into a neighbourhood critter and then making food out of it vs. the "lesserness" of
Re: (Score:2)
You probably aren't as good at mental arithmetic as someone who's had to do math without calculators though. The question is whether that is a problem.
Or at remembering phone numbers as someone who doesn't rely on their cell phone's address book.
The bottom line is that simple exercise tasks "oil" the brain, keeping it functioning smoothly and ready for when you do need to be creative.
Re: (Score:2)
That doesn't follow either; repeatedly performing some task familiarizes your brain with that particular task. While performance in similar tasks (say, remembering people's zip codes, for someone who has had to remember phone numbers as in your example) might be improved, it doesn't necessarily make one more creative in general.
idiocracy... (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Oblig. XKCD: 603: Idiocracy [xkcd.com]
Re: (Score:2, Interesting)
See idiocracy. Go out and watch it. I'll wait
The main premise of Idiocracy was that IQ is hereditary and those with lower IQs spend more time procreating. Automation was merely allowing their society to function, barely. IOW, I don't see your point. Can you elaborate, please?
Oh really? (Score:5, Insightful)
Oh, so new software takes too long to build because of lengthy manual optimization? That's news indeed. Even if it did, will the compiler find a better polygon intersection algorithm for me? Will it write a spatial hash? Will it find places where I am calculating something in a tight loop and move the code somewhere higher?
Re:Oh really? (Score:5, Interesting)
The last one is actually quite possible, and indeed is a huge area of compiler research.
Re: (Score:2)
will the compiler find a better polygon intersection algorithm for me? Will it write a spatial hash? Will it find places where I am calculating something in a tight loop and move the code somewhere higher?
The real question in everybody's mind is: will it blend? [willitblend.com]
Re:Oh really? (Score:4, Funny)
Oh, so new software takes too long to build because of lengthy manual optimization?
It depends on your definition of optimization.
In my current project we have about twenty guys "performing lengthy manual optimizations". It sounds quite a bit better than having twenty guys "correcting the absolute crap that wouldn't even compile".
Re: (Score:2, Insightful)
Re: (Score:2)
Will it find places where I am calculating something in a tight loop and move the code somewhere higher?
Look up loop-invariant code motion. This has been supported in shipping compilers for a while.
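A tiny sketch of what that pass does (illustrative function names, nothing from TFA):

void scale(int *dst, const int *src, int n, int a, int b) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * (a * b);   /* a*b never changes inside the loop */
}

/* The optimizer effectively rewrites it as: */
void scale_hoisted(int *dst, const int *src, int n, int a, int b) {
    int k = a * b;                   /* invariant hoisted out, computed once */
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}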
Re:Oh really? (Score:5, Informative)
Read the article, that's not what this does. This is a project to automatically generate optimising compilers for custom architectures. The summary is a little unclear :-(
It reduces time to market because you don't have to spend ages making an optimising compiler for your custom chip.
Re: (Score:3, Insightful)
Thanks. That is very, very different from what the summary says.
Re:Oh really? (Score:4, Interesting)
Confusing summaries like that are so frequent that sometimes I actually go and RTFA!
Seriously, the summaries should be subject to moderation too (I don't know if the firehose thing lets you do that).
Re: (Score:2)
There's this thing called algorithm recognition (Score:2)
That's news indeed. Even if it did, will the compiler find a better polygon intersection algorithm for me? Will it write a spatial hash?
I TA'ed a course called Contract-based Programming (which was about Hoare triples and JML, a java extension which does checking of pre-/postconditions and invariants).
I noted that the lecturer had a book on his shelves titled "Algorithm recognition". I speculate that it might talk, for instance, about how to recognize bubble sort and replace it with quicksort. Or how sorted(list)[0] might be replaced by min(list), or how sorted(list)[4] might be replaced by quickselect(list, 4).
I don't know what state of
Re: (Score:2)
>>Oh, so new software takes too long to build because of lengthy manual optimization?
I indeed spend 18% of my coding time typing "gcc -O3".
Re: (Score:3, Informative)
No, this 'learning' compiler only learns how to optimally translate C++ statements to machine level operations. It cannot choose high level algorithms for you. And the reason that such a learning compiler is useful is not to help lazy application programmers, but because developing new, optimised compilers for the many different processors and platforms out there (think computers, mobile phones, embedded systems, etc) is time consuming.
Re: (Score:2, Funny)
Yes. That's why most manuals are not very optimized. So the next time you think a manual is close to useless, don't complain. It's in order to save you time in the building process.
Re: (Score:3, Interesting)
While the summary is wrong on this subject, I can tell you that, yes, manual optimization is part of our work and can slow down the release of our product. Even if we tell a customer that yes, we will be able to do VGA 30 FPS H.264 encode, code optimization on our custom core is going to take some time and effort. I work in the embedded multimedia field.
I think we're going to be very, very interested in this project.
Re: (Score:2)
Will it find places where I am calculating something in a tight loop and move the code somewhere higher?
No, and it doesn't need to, since vanilla GCC has had that optimisation for years.
Re: (Score:2)
Will it find places where I am calculating something in a tight loop and move the code somewhere higher?
Quite likely, yes.
Even a dumb optimizer will move loop-invariant code outside of a loop, and maybe partially unroll the loop to reduce the looping overhead. The latest gcc will even automatically vectorize the loop for you, executing a number of iterations in parallel using SSE/etc. instructions if it's a suitable candidate.
e.g.
You may write:
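(A made-up example; any simple element-wise loop is a candidate:)

/* With gcc -O3 (or -O2 -ftree-vectorize), the compiler can hoist
   loop-invariant work, partially unroll, and use SSE to process
   four ints per instruction. */
void add_arrays(int *restrict c, const int *restrict a,
                const int *restrict b, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}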
Oblig. (Score:3, Funny)
Re: (Score:2)
You do realise what you've gone and done, don't you? They'll have to change the GCC acronym to mean the Gnu Creative Compiler, now.
Less time? How about same time, better product? (Score:4, Insightful)
The compiler is expected to significantly reduce time-to-market of new software, because lengthy manual optimization can now be carried out by the compiler.
How about this: The coders take the time they would have used to "optimize" and instead better document, test, and debug the code. Instead of same quality, less money, make it better quality, same money? You know that the developer isn't going to charge less money for a new product because it took them less time to get it out the door.
Re: (Score:2)
Instead of same quality, less money, make it better quality, same money?
Yes, that always works.
I'm asking my client right now whether he wants a quality product in February or a barely working one in October. Let's see what happens.
Re: (Score:2)
Dumb Summary (Score:5, Insightful)
automatically learn how to best optimise programs for re-configurable heterogeneous embedded processors
That's kinda important to mention no?
Re: (Score:3, Insightful)
Re: (Score:2)
automatically learn how to best optimise programs for re-configurable heterogeneous embedded processors
That's kinda important to mention no?
Well, it could be optimizing for unconfigurable homogeneous strawberry puddings.
It'd be quite a bit more impressive, from a culinary standpoint.
Re: (Score:2)
Few Questions for any programmers (Score:2, Interesting)
Re:Few Questions for any programmers (Score:5, Informative)
What things can a compiler do to your code to 'optimize' it for you?
Check out the Wikipedia article [wikipedia.org] on optimization for some examples.
In brief, some of the more common ones are things like substituting known values for expressions (e.g. x = 3; y = x + 2; can be changed to x = 3; y = 5;), moving code that doesn't do anything when run repeatedly outside a loop, and architecture-specific optimizations like code scheduling and register allocation. (E.g. with no -O parameters, or -O0, for something like "y = x; z = x;" GCC will generate code that loads "x" from memory twice, once for each statement. With optimization, it will load it once and store it in a register for both instructions.)
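As a toy illustration of the first case (a hypothetical function, not from the article):

int f(void) {
    int x = 3;
    int y = x + 2;   /* constant propagation turns this into y = 5 */
    return y * 10;   /* constant folding then reduces f() to "return 50" */
}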
If the compiler tries to do this, wouldn't it likely screw your code up?
There are cases where optimizations will screw something up. One example is as follows. It's considered good security practice to zero out memory that held sensitive information (e.g. passwords or cryptographic keys) to limit the lifetime of that data. So you might see something like "password = input(); check(password); zero_memory(password); delete password;". But the compiler might see that zero_memory writes into password, yet those values are never read afterwards. Why write something you never need? So it removes the zero_memory call as useless code that can't affect anything. And your program no longer clears the sensitive memory.
This was actually a bug in a couple crypto libraries for a while.
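In C, the problematic pattern looks roughly like this (hypothetical helper names):

#include <string.h>

void read_password(char *buf, size_t n);   /* hypothetical */
void check_password(const char *buf);      /* hypothetical */

void handle_login(void) {
    char password[64];
    read_password(password, sizeof password);
    check_password(password);
    memset(password, 0, sizeof password);   /* dead store: password is never
                                               read again, so the optimizer
                                               may silently drop this call */
}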
Re: (Score:1)
Re: (Score:2)
I see, thanks for the detailed explanation; I hadn't thought of those. I like the first example, and the fact that the compiler can recognize your code and make that replacement confidently. That's just cool.
Any compiler from the late '80s will do that, but that is not really a safe optimization. If your optimization is safe, the optimized and unoptimized versions should produce the same output, even if they calculate it in different ways. However, in the given example there is no guarantee that when the (non-optimized) y=x+2 is executed, x will still be 3 (and y will still be assigned 5). The optimized version always assigns 5. So the optimized and unoptimized versions might do different things. Even if two assignments follo
Re: (Score:1, Informative)
There are cases where optimizations will screw something up. One example is as follows. It's considered good security practice to zero out memory that held sensitive information (e.g. passwords or cryptographic keys) to limit the lifetime of that data. So you might see something like "password = input(); check(password); zero_memory(password); delete password;". But the compiler might see that zero_memory writes into password, but those values are never read. Why write something if you never need it? So it would remove the zero_memory call as it's useless code that can't affect anything. So it removes it. And your program no longer clears the sensitive memory.
And it was the crypto library's fault, not the compiler's fault. Most languages worthy of doing crypto programming in have a facility to say, roughly, "don't optimize this". An example: in C, the keyword "volatile" instructs the compiler that the field may be changed at any time, and thus all reads/writes must take place and must do so atomically [unfortunately, the C spec doesn't specify "in order" for volatile fields, but I digress].
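For what it's worth, a sketch of one common workaround for the memset case above is to do the clearing through a volatile pointer, which forces the compiler to perform every store:

#include <stddef.h>

void secure_zero(void *p, size_t n) {
    volatile unsigned char *vp = p;   /* volatile: every store must happen */
    while (n--)
        *vp++ = 0;
}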
Re: (Score:1, Informative)
The C spec doesn't require atomicity of volatiles, but it does require in order. So you've got it the wrong way round!
Re: (Score:2)
Re:Few Questions for any programmers (Score:5, Insightful)
it optimizes the translation of "insert high level language here" to assembly opcodes. When you code, the stuff you type is not what ends up in the binary that's compiled/assembled/linked.
I highly recommend you add a tiny amount of assembly programming dabbling to that list, and you will gain better understanding of how compiler optimization is not a simple affair. There are many ways to do the same thing.
As an example of a basic optimization method: removing dead code, i.e. code that is in there but never called from the main method.
Another one is vector optimization, where routines, or the parts of routines where it's suitable, use the vector units of a CPU to speed things up a little.
Re: (Score:1)
Re:Few Questions for any programmers (Score:5, Informative)
In regards to learning assembly, if you run Linux, the best book I can recommend is Programming from the Ground Up [gnu.org]. It's licensed under the GNU Free Documentation License and, in my honest opinion, is likely the single best book for anyone who has no idea but wants to start. I already had some clue, so I skipped the first two thirds of the book, but read it later for shits and giggles and found it a very easy book to grasp.
To this day if I forget minor details about things I pick that back up and re-read it a bit :)
Re:Few Questions for any programmers (Score:4, Insightful)
Assembly itself is not "hard". The language itself is simple. I'd argue that most of the "hardness" is due to its simplicity. There are almost none of the abstract structures and methods that high-level languages provide, and even for something as "simple" as calling a function, you have to manually push data onto a stack, jump to the new location, and then pop the data back afterwards, etc.
It might be unnecessary for those programmers who have no interest in understanding how the computer actually works, but it's worth a look.
Disclaimer: I've never really done any assembly programming, only "dabbled" in it for a bit a few years ago.
Re: (Score:3, Insightful)
I agree that it's a good idea to learn assembly / machine language to understand what a compiler is doing, but learning the assembly language of the computer you use at home is not as reasonable a suggestion today as it once was. Learning to code to a 6802 wasn't bad; it only has a few instructions, and it's very instructive (and fun) to find out how many things you can do with just those. I think trying to write for your home PC in assembly is now beyond a beginner exercise though.
Microcontroller manufac
Re: (Score:3, Informative)
Anybody out there know a good emulator for teaching assembly programming?
SPIM [wisc.edu] is a possibility. It was used in a few courses (operating systems, compilers) at UCB some years ago. (Don't know if it's still used.)
Re: (Score:2)
Yeah, it's still used. Of course, being a MIPS emulator, it's not exactly going to turn you into an amazing x86 optimizer, but it's a good ISA for learning simple assembly.
BeebEm (Score:2)
BeebEm?
The BBC Micro came with an in-line assembler in the BASIC that shipped with the machine. The manual that came with it had a full reference for BASIC and 6502 assembler. It was a great machine for learning about computers; lots of languages available, BASIC and assembler out of the box, and so many and varied I/O ports that it was a hardware hacker's dream as well. I remember the first time I patched in a routine that made the on-board sound hardware generate "key ticks" for each keystroke, and being thrilled.
BBC BA
Re: (Score:2)
"Anybody out there know a good emulator for teaching assembly programming?"
CorePy (www.corepy.org), while not an emulator, is probably the easiest way to learn assembly. It's a complete environment for assembly-level programming using Python and supports all the major platforms (x86[_64]/SSE, PPC/VMX, Cell SPU, ATI GPUs).
Instead of using inline assembly, CorePy represents all assembly instructions as Python objects, leading to a very natural syntax and also enabling some really interesting methods for gene
Re: (Score:3, Interesting)
vs
Re: (Score:2, Informative)
Replace a mod (e.g. x % 32) with a bitwise-and (e.g. x & 31) when the divisor is a power of two.
Another very similar one, and one that comes up more commonly, is the replacement of a multiplication or division by a constant by a series of additions, subtractions, and bitshifts.
For instance, "x/4" is the same as "x>>2", but the division at one point in time (and still with some compilers and no optimization) would produce code that ran slower. Some people still make this optimization by hand, but I
Re: (Score:2)
(You can combine operations too. x*7 is the same as x3-x, x*20 is the same as x4 + x2, etc.)
That should be
x*7 == x << 3 - x
x*20 == x << 4 + x << 2
Slashcode (somewhat reasonably) ate my <<s.
Re: (Score:1)
No, I just got the precedence wrong. ;-)
In my post, pretend that << has a higher precedence than + and -.
Re: (Score:3, Interesting)
Another very similar one, and one that comes up more commonly, is the replacement of a multiplication or division by a constant by a series of additions, subtractions, and bitshifts.
ARGH! Mod parent down! Please, please, please don't ever repeat this again to people asking things about optimisation. On most modern computers, shifts are slow. They are often even microcoded as multiplications, because they are incredibly rare in code outside segments where someone has decided to 'optimise'. Even when they're not, a typical CPU has more multiply units than shift units and the extra operations needed from the shift and add sequence bloat i-cache usage and cause pipeline stalls by addi
Re: (Score:3, Informative)
No, that's not true. A shift instruction has a one-cycle latency and 1/2-cycle throughput on the Core2 / Core2 Duo. An add instruction also has a one-cycle latency, and 1/3-cycle throughput on the Core2 Duo.
The integer multiplier on the Core2-Duo has a 4-cycle throughput and an 8-cycle latency. So in a "simple" case like x*9 = (x<<3)+x the optimisation would take 2 cycles, and the straight mul would take 8. In more complex cases the individual shifts will pipeline for more of a benefit. Only in cases wh
Re: (Score:2)
Note that microbenchmarks here don't tell the whole story, because the increase in cache churn, register pressure, and inter-instruction dependencies also slow things down. When you issue a set of shift and add instructions, each one has to complete, in order, before the next can start. With a multiply, this can be overlapped with load and store operations. A well-designed microbenchmark will show this to some degree, but in code where the multiply is close to other instructions it becomes even more obv
Re: (Score:2)
You've misunderstood what I said - there is a benefit from independent shifts being pipelined. So consider the case where I want to co
Re: (Score:2)
If I use a movl to copy the value into a second register
At which point we're talking about inline assembly at least, not a "simple" in-compiled-language optimization, e.g. C mul expr -> C shift/add expr, which is what the entire preceding thread has been talking about. And if you're talking about writing code in assembly, that also has no place in a thread about *compiler* optimizations. :)
Never mind that you're now using another register, which depending on the specific circumstances, may be a bad thing, e.g. the compiler might find a better use for that re
Re: (Score:2)
What you say is true for inlining low-level assembly optimisations. But that wasn't actually my point. I'm not writing code in a "high" level language like C with inline fragments - I'm writing a code generator for a compiler. So everything that you say about the compiler knowing the architecture better than the programmer applies. But I'm checking those assumptions to tune how the backend generates code.
The example that I mentioned comes up when writing multi-precision multiplication routines like the low-
Re: (Score:2)
Yea, I had a feeling what you were talking about was either in asm work itself, or most likely an optimization done *within* some kind of compiler or special-purpose context.
Had I looked at the posts below yours before responding, I'd have realised you weren't the only one to go "offtopic", strictly speaking, and I probably wouldn't have bothered to say anything.
My comment was really aimed at readers like the OP, so they knew this was no
Re: (Score:2)
On most modern computers, shifts are slow. They are often even microcoded as multiplications.
You're right to say that recoding a multiplication as a combination of adds and shifts is likely to be a loss since multiplication is so fast, and since the extra instruction fetches (memory accesses, decode overhead) are going to kill it!
However, you're wrong about shifts being slow. Ever since the early days, shifts have been implemented by a "barrel shifter" that can shift by an arbitrary N bits in a single clock cycle.
Re: (Score:2)
I assume you mean on x86 architecture. There are architectures where it's faster to do a shift, ARM being a very popular one in number of cores sold. On ARM, operands pass through a barrel shifter that allows them to be shifted almost any which way during instruction execution.
Thus, a lone shift operation actually wastes time on ARM because it's translated to a move instruction. But a shift+add can be done in one instruction (rather than 2) because the shift is done
Re: (Score:2)
SHR EAX,1
instead of
MOV ECX,2
XOR EDX,EDX
DIV ECX
Re:Few Questions for any programmers (Score:5, Informative)
Naive compiler translations can be functionally correct but sub-optimal with respect to runtime performance, memory/disk footprint, etc. Compiler optimisation is the effort to make this translation as optimal as possible with respect to some variable(s), e.g. performance or size.
What you are thinking of sounds like source code optimization. There are various interpretations of this, but to my mind it means a combination of optimal algorithm selection and optimal algorithm implementation. Note that complex algorithms can be decomposed into smaller common algorithms, e.g. a sort routine may be part of some higher-level algorithm; the sort routine may be optimised independently of the higher-level routine.
Check out: http://en.wikipedia.org/wiki/Compiler_optimization
Re: (Score:2)
"compilers translate a higher-level language into a lower-level one"
Not always [google.com].
Actually, I'm fuzzy on which of Java or Javascript would be considered higher- or lower-level. It's not clear-cut, and could probably be considered more of a "sideways" shift than "downwards".
Re: (Score:2)
Porting is what humans do.
Google Web Toolkit contains a Java to Javascript compiler. It is the automated translation by a program of one computer language into another, just like translating C to assembly language, or assembly language to machine code, or Java to Java Bytecode, or Java Bytecode to machine code. All of the programs which do those translations are "compilers"[0]. A program which takes Java and spits out Javascript is no different. It's just another compiler, albeit with a very unusual target.
Re: (Score:2, Informative)
What things can a compiler do to your code to 'optimize' it for you?
The correct answer to this question is... it depends. No matter how advanced your compiler is it can't select the correct algorithm for you. If you're ordering your lists with a bubble sort instead of some kind of btree, there's nothing the compiler can do to help you except deliver the best O(n^2) sort it can. A truly artistic programmer can transcend all of the optimizations this compiler might achieve, by several orders of magnitude.
But if you're the kind of code geek that Microsoft hires, yeah, you
Re: (Score:2)
As a trivial example, one of the benchmarks I use for testing my Smalltalk compiler is a naive calculation of the Fibonacci sequence. With my first version, which did very little optimisation, it was much slower than GCC-compiled Objective-C (it's now about 50% slower). If, however, I compared a naive implementation in Objective-C to a more intelligent (O(n)) implementation in Smalltalk, the Smalltalk implementation was faster for all n greater than 30, and when you got closer to 40 it was several orders
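In C terms, the two implementations being compared look roughly like this (a sketch; the originals were Smalltalk and Objective-C):

unsigned long fib_naive(unsigned n) {    /* exponential time */
    return n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2);
}

unsigned long fib_linear(unsigned n) {   /* O(n) */
    unsigned long a = 0, b = 1;
    while (n--) {
        unsigned long t = a + b;
        a = b;
        b = t;
    }
    return a;
}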
Re: (Score:3)
Even faster is the closed form solution:
http://mathworld.wolfram.com/BinetsFibonacciNumberFormula.html [wolfram.com]
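Roughly, as a sketch (double precision only stays exact up to around n = 70):

#include <math.h>

unsigned long fib_binet(unsigned n) {
    const double s5  = sqrt(5.0);
    const double phi = (1.0 + s5) / 2.0;   /* the golden ratio */
    const double psi = (1.0 - s5) / 2.0;
    return (unsigned long)llround((pow(phi, n) - pow(psi, n)) / s5);
}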
I for one? (Score:3, Funny)
Who would've guessed a compiler would become the first program to achieve sentience ;P
It will surely, er, program our programs to kill us.
Re:I for one? (Score:5, Funny)
It will surely, er, program our programs to kill us.
No, it'll just optimize out all emergency stop and safety routines. Humans inevitably die anyway so there is no point in slowing down the code to prevent it.
Re: (Score:3, Funny)
Humans inevitably die anyway so there is no point in slowing down the code to prevent it.
In fact, think of how much of an optimization that is! I mean, suppose people were killed by our robot overlords at 25. That's 1/3 of 75 years; that's a 3x improvement in the speed we go through life! In a world where a 20% speed improvement from a new optimization is very impressive, 3x is just great!
John Connor?? (Score:1)
clang (Score:1)
Anyone with a deeper knowledge about IBM's offering know how it compares to clang [wikipedia.org], or whether there might be a synergy between the two?
Will it fix Crysis and Vanguard? (Score:2, Funny)
So that the games run on a normal machine?
Re: (Score:2)
Will it fix Crysis and Vanguard?
So that the games run on a normal machine?
Ah geez man, at least ask for something within the realm of possibility... like a release date for DNF.
Long Compile time - Long time to market ? (Score:2, Funny)
Re: (Score:2, Insightful)
Gentoo (Score:2)
So if I'd compile a Linux from scratch with this new compiler, everything speeds up by 18% on average? That would be quite impressive, and possibly the best justification for Gentoo. Might be nice for my aging notebook...
Re: (Score:1)
Try LFS.
Ricer? (Score:2, Funny)
Summary is extremely misleading (Score:4, Informative)
>The compiler is expected to significantly reduce time-to-market of new software,
>because lengthy manual optimization can now be carried out by the compiler.
The time to *make a new compiler* for a certain processor is reduced, and the process of figuring out which optimizations should be in the compiler for that architecture is automated.
This is for the kind of research where they attempt to put many specialized processors on a single chip instead of a general monolithic one. In that case, you need many compilers, and tuning those is important. It's the time spent optimizing THOSE that is lowered, not the time spent writing the software that gets compiled.
I see no real relevance to the "normal" desktop situation on that website.
Re: (Score:2)
>Actually the time to make a new compiler is reduced *and* the optimization performance of the
>compiler is increased when compared to standard GCC (which is what the 18% refers to).
Yes, 18% performance increase on an IBM p system running an embedded application benchmark. Ahem. Let's not talk about the compilation time, either.
It seems to be basically a very smart way to find the optimal combination of gcc optimization flags.
Now, how this will achieve:
"The compiler is expected to significantly reduc
Re: (Score:2)
The summary is complete bollocks.
How do you know? It seems entirely plausible to me that significantly better compiler optimization could reduce the level of manual optimization needed for embedded systems, and thus reduce the time-to-market.
The *summary* is bollocks because it doesn't mention the "embedded" part, nor does it mention that this app seems to really be for compiler *authors*, not compiler *users*. This is from their PDF doc:
Using MILEPOST GCC, after a few weeks training, we were able to learn a model that automatically improves the execution time of MiBench benchmark by 11% demonstrating the use of our machine learning based compiler.
A few WEEKS of training!?!?
More than likely, this could be used by the GCC folks to figure out the best default set of opts for a given -Ox level on a given arch, by running it over some representative set of real-world code.
However, I share the GP's skepticism on this:
Better RAID controllers! Better routers! (Score:2)
I can see how this could lead to very fast, very cheap RAID controllers.
Also, imagine if those cheap little gigabit switches were actually 8-port gigabit routers.
This is the sort of thing you can do with this technology.
"Lengthy manual optimization"? (Score:2)
> The compiler is expected to significantly reduce time-to-market of new software, because
> lengthy manual optimization can now be carried out by the compiler.
I always thought that testing and debugging were the lengthy manual steps. Oh. Wait. "Time to market". They're talking about proprietary software. Never mind.
Re: (Score:3, Insightful)
I always thought that testing and debugging were the lengthy manual steps
Not if you wrote the code well! ;-)
Seriously, as someone who's been doing this a long time (since '78, professionally since '82), and who is still at the top of his game, I nowadays spend *very* little time on debugging since it works first time - even the complicated multi-threaded, mutex type of stuff which is what I primarily write nowadays. After a while you stop making mistakes!
But, anyways, it seems the main target for this adapt
The longest journey... (Score:2)
* Q15. What is a SIMD unit?
A SIMD unit is a piece of hardware that does many things.
Re: (Score:1)
They kept hooking hardware into him--decision-action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve-digit random numbers, a greatly augmented temporary memory. Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors. And woke up.
Some logics get nervous breakdowns. Overloaded phone system behaves like frighte
Re: (Score:1)
Re: (Score:2)
Mycroft