Distributed Compilation, a Programmer's Delight 60
cyberpead writes in with a developerWorks article on the open source tool options that can help speed up your build process by distributing it across multiple machines in a local area network.
What about Excuse #1? (Score:5, Funny)
Sorry - compiling [xkcd.com]
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:What about Excuse #1? (Score:4, Insightful)
That would allow for people to inject malware, wouldn't it?
To compile:
void printhello() {
    printf("Hello world!\n");
}
evil bastard changes to:
void printhello() {
    {
    }
    printf("Hello world\n");
}
Since the most practical way to spot the evil binary would be to compile the code yourself and compare, that rather defeats the purpose of having someone else compile it. I guess you could have many random people compile the same piece of source code and then compare all the produced binaries, but that makes the whole thing rather complicated.
Also, the p2p thing would only be useful for open source, as I doubt it would be smart for people trying to produce some closed source product to send their source to a p2p network that may or may not store everything.
And this is all assuming the delays introduced by sending all this stuff over the internet are not so large that compiling locally is faster or almost as fast.
It's probably best to compile your stuff on your lan, on machines that are close, and that can be trusted.
Re: (Score:2)
If you are that paranoid only use a farm where you have control over all the machines.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
When I first used Gentoo several years ago it was with a 950 MHz Athlon CPU, and it wasn't too bad.
Now, with quad-core CPUs becoming the norm even in desktop machines, the compiling thing is even less of an issue.
I was able to run software on Gentoo that I could never get to run well together on any other distribution. You can almost always get the latest and greatest versions of everything with Gentoo. With Kernels taking almost no t
Re: (Score:3, Informative)
Re: (Score:1)
Re: (Score:2, Insightful)
The longest OpenOffice compile I've ever done was something around 5 hours, and that was with the system doing other stuff on the side. Distcc et al. reduce the compile time to around 2 hours.
Re: (Score:2)
If I recall correctly, OpenOffice was one such package.
Gentoo isn't that masochistic.
Re: (Score:1)
Dilbert version is funnier (Score:2)
Tried to find it, but couldn't. It goes something like this:
Panel 1: PHB, walking by Dilbert's cube: Dilbert, why aren't you working?
Panel 2: Dilbert: My programs are compiling.
Panel 3: PHB, sitting back at his desk by himself, thought bubble: I wonder if my programs compile.
Re: (Score:1)
Bulk building is more effective (Score:4, Informative)
Due to a strange quirk in the way compilers are designed, it's (MUCH) faster to build a dozen files that include every file in your project than to build thousands of files.
Once build times are down to 5 - 15 minutes you don't need distributed compiling. The link step is typically the most expensive anyway, so distributed compiling doesn't get you much.
Re: (Score:1)
Re: (Score:2)
Or link-time code generation if you have that level of optimization turned on.
Preprocessing in C (Score:5, Informative)
I guess you are referring to the preprocessing step of C and C++ compilers, which was really a lame hack, I think. If you have a lot of include files, preprocessing produces large intermediate files containing a lot of overlapping code that has to be compiled over and over again.
Preprocessing should have been removed a long time ago, but because of nasty backwards-compatibility issues, it never was. Other languages, such as Java and D, solve this problem in a much better way, as did Turbo Pascal with its TPU files in the late 1980s.
Re: (Score:1)
Compiling each CPP in turn requires 10 - 100 files read off the disk each time.
Modern operating systems get around this issue with a disk cache. In reality, 100 files will be read off the disk for the first compile, and the rest of the compiles will just access the cached copy in memory (unless memory is in short supply on your system).
Re: (Score:2, Informative)
It's not sufficient for large projects; disk I/O is still a very large overhead when compiling. Switching to a 'Unity' build scheme reduced compile times significantly (more so than the distributed compile solution we used, since that still had to read the files off disk multiple times in addition to sending them over the wire to multiple machines). The .cpp and .h files make up about 110 MB on our project.
Re: (Score:2)
Due to a strange quirk in the way compilers are designed, it's (MUCH) faster to build a dozen files that include every file in your project than to build thousands of files.
True of Visual C++, not true of any other compiler I'm familiar with.
Re: (Score:2)
But your code will be harder to understand. You're giving up a lot of tools, like static globals in C and anonymous namespaces in C++.
Every time I have encountered painfully long compile times, the cause has been sloppiness. Usually, the direct cause is t
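The "static globals" caveat above can be seen concretely (file names invented for illustration): two files may each legally define their own file-scope 'static int counter', but a unity file that #includes both makes the definitions collide.

```shell
# Two translation units, each with its own private file-scope variable.
mkdir -p /tmp/static-demo && cd /tmp/static-demo
printf 'static int counter = 1;\nint one(void) { return counter; }\n' > a.c
printf 'static int counter = 2;\nint two(void) { return counter; }\n' > b.c
gcc -c a.c b.c                  # compiled separately: perfectly legal

# A unity file concatenates both into one translation unit, so the
# two 'counter' definitions now clash.
printf '#include "a.c"\n#include "b.c"\n' > unity.c
gcc -c unity.c -o /dev/null 2>/dev/null || echo "redefinition error, as expected"
```

This is why adopting unity builds usually forces renaming or removing file-local statics, which is part of the "harder to understand" cost.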
Very Cool (Score:2)
Is this new? (Score:3, Insightful)
Article summary: use 'make -j', 'distcc' and 'ccache', or some combination of these. These utilities are well known and widely used already, no?
Re: (Score:2)
Re: (Score:2)
Yeah, I was wondering the same thing. distcc and ccache have been a staple of Gentoo users since forever.
Minor error (Score:5, Informative)
Re: (Score:3, Insightful)
That implies you read the article, but that can't be the case.
Re: (Score:3, Informative)
Maybe there's some special cases, but I've never had to have a shared source repository in order to use distcc.
They also say the machines need to be exactly the same configuration, and they do elaborate on that a little, but it's not strictly true. Depending on the source you're compiling, you might only need the same major version of GCC.
Re: (Score:3, Informative)
Re: (Score:1)
Gentoo machines were using gcc 4.1.x i586, Knoppix had gcc 4.1.x i386. All 32-bit. The resulting build was lightning fast and error-free. This was an app not the kernel.
Re: (Score:3, Insightful)
You didn't read my post or you have low comprehension skills. I said "Depending on the source you're compiling."
A Kernel build might require specific libraries to be the same version. Building Firefox might not. Some apps you can build on Linux and use a cygwin box running distcc to help. Others you cannot.
It's
In other news (Score:5, Funny)
Slashdot readership plummets to an all-time low as programmers actually have to work.
Re: (Score:2)
> Slashdot readership plummets to an all-time low as programmers actually have to work.
Not at Sun [cnn.com], though... yikes.
Re: (Score:2)
Ouch!
raggle fraggle (Score:3, Funny)
distcc deliiiiiiiight.
Re: (Score:2)
Thought thief!
Re:distcc has one fatal flaw (Score:4, Informative)
In pump mode, distcc runs the preprocessor remotely too. To do so, the preprocessor must have access to all the files that it would have accessed if it had been running locally. In pump mode, therefore, distcc gathers all of the recursively included headers, except the ones that are default system headers, and sends them along with the source file to the compilation server.
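For reference, the shape of a pump-mode invocation per the distcc documentation quoted above (host names here are placeholders): the ',cpp' option marks a host as able to do remote preprocessing, and ',lzo' compresses the extra header traffic pump mode generates — distcc requires both together.

```shell
# Placeholder host names; substitute your own build machines.
export DISTCC_HOSTS="buildbox1,cpp,lzo buildbox2,cpp,lzo"
# 'pump' wraps the build so the include server can ship headers along
# with each source file to the remote hosts.
pump make -j16 CC="distcc gcc"
```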
We can't take TFA's author's advice! (Score:2)
He's using TCSH! That's BAD FOR YOU! [faqs.org]
Ok, enough offtopic. This is actually pretty cool, considering our development environment is clusters and clusters of IBM P-Series LPARS, and our codebase is (A) disgustingly huge, and (B) actually pretty amenable to parallelized make.
FINALLY, I can justify to my boss that browsing /. is research! (Now if I could just make a good case for 4chan...)
SMEs (Score:3)
The reason for a lot of build machines in the rack may not be horsepower: you may need x different machine versions, or a certain build may only build on a certain machine because of licence restrictions, or you may only have one Windows box with the Japanese character set installed because it causes so many problems that multiplying them just isn't worth it, and so on and so forth. Building across n copies of the same machine version just isn't worth the work, IMO. Just get a bigger machine and save on machine maintenance.
So the real benefit of distcc might be parallel compilation; I see a big future for this, particularly with multi-core chipsets becoming commonplace. Once upon a time, I would not countenance a dual-chip machine in the rack because of the indeterminate mayhem it would sometimes cause to a random piece of code deep in the bowels. Those problems are long gone.
Umm. I wonder how this plays out with VMware? A distributed compiler smart enough to use the (correct) local compiler across a varied build set would be worth having...
No distcc hints... (Score:1)
Icecream. (Score:2, Interesting)
It's similar to distcc, but with some notable benefits.
'native' breaks distcc (Score:2)
If you want to distribute compilations, you must not use gcc's '-march=native' option: each compiler instance will emit objects tuned for the machine it happens to run on, and your compilation hosts may not all be identical.
I've found a better solution altogether (Score:1)
It's called an "interpreter".
(Cue flamewar in 3...2...1...