Weta Digital Grows Cluster
Korgan writes "A little over 3 years after their last upgrade, Weta Digital has just added 250 more blade servers to their render farm to help with the final renderings of King Kong. From the article: "The IBM Xeon blade servers, each with two 3.4 gigahertz processors and 8 gigabytes of memory, are housed at the New Zealand Supercomputing Centre in central Wellington. They have been added to the centre's existing bank of 1144 Intel 2.8GHz processors, boosting its power by 50 per cent to create a supercomputer with the equivalent power of nearly 15,000 PCs. The servers run the Red Hat version of the open-source Linux operating system. The purchase means the centre is back among the 100 largest supercomputing clusters in the world." And all that computing power is still available for hire when Peter Jackson isn't using it."
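A quick back-of-the-envelope check of those numbers (treating clock rate times CPU count as a crude throughput proxy is my assumption; the article doesn't say how the "50 per cent" was computed):

    # Sanity check of the quoted "50 per cent" power boost.
    existing = 1144 * 2.8          # existing bank: 1144 CPUs at 2.8 GHz
    new = 250 * 2 * 3.4            # 250 new blades, two 3.4 GHz CPUs each
    print(f"boost: {new / existing:.0%}")   # ~53%, close to the quoted figure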
Export restrictions? (Score:5, Interesting)
I know that historically NeXT did quite a bit of work for TLA agencies, and that Richard Crandall's program, zilla.app, grabbed some attention from interested parties. Because of this work, NeXT had some cash infusion for their hardware even after shutting the line down for general commercial consumption. More recently, Apple has been selling Xserves to some of those same agencies and contractors, but I do not know whether they are selling any clusters outside the US.
The history behind this law, of course, was that the CIA and NSA were concerned that foreign governments could use the compute time to help design nuclear weapons, as well as to break cryptography protecting US secrets.
*sigh* (Score:1, Interesting)
Am I the only one who prefers models and stop-motion animation to the CGI garbage of the last 15 years?
Re:Export restrictions? (Score:3, Interesting)
More interestingly, can anyone see digital actors quickly surpassing their organic cousins, no matter what Peter Jackson says [kongisking.net]?
And slightly more interestingly, when will New Zealand surpass California in film making? It is the ideal location, with better light, more interesting geography, and (at the moment) far cheaper costs. There are of course problems with the remoteness of the location, but with the world rapidly shrinking (to borrow the cliché), this is surely no longer such a problem, especially with the work Mr. Jackson is putting in on the logistics.
Distributed computing... (Score:2, Interesting)
Given the high degree of parallelism and the social aspects, you'd think distributed computing would be ideal for Hollywood rendering, provided you could implement sufficient security restrictions (restrictions which should be perfectly manageable). How many people out there do you think would like to be able to say "I rendered part of this movie!"?
There are some issues, of course, but it strikes me as worth exploring.
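A minimal sketch of the idea, assuming frames really are independent work units (render_frame is a hypothetical stand-in for a real renderer, and a local process pool stands in for remote volunteer machines):

    from multiprocessing import Pool  # stand-in for remote volunteer nodes

    def render_frame(frame_number):
        # Placeholder for a real renderer; the point is that each
        # frame is an independent task with no cross-talk needed.
        return f"frame_{frame_number:06d}.exr"

    if __name__ == "__main__":
        with Pool(processes=8) as pool:
            # Frames can be rendered in any order and merged later.
            results = pool.map(render_frame, range(240))  # ~10 s of film at 24 fps
        print(f"rendered {len(results)} frames")

The hard part, as the parent says, would be the security and asset-protection side, not the distribution itself.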
Re:Nonsense Statement (Score:3, Interesting)
The lines have blurred due to clusters. My definition is "a collection of hardware that provides a non-trivial level of performance on a single problem." Of course, "non-trivial" has various interpretations, and working toward solving a single problem is the important part. Rendering is a trivially parallel application, as it is really a bunch of small independent problems. Most supercomputer applications would probably run sub-optimally on this system (I assume it has GigE as an interconnect) because they require much more processor-to-processor communication. BTW, I run the ClusterMonkey [clustermonkey.net] site that covers clusters and HPC, if you want to learn more about clusters.
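A rough cost model of that interconnect point, with illustrative numbers (the latency, bandwidth, message size, and neighbor count are assumptions for the sake of the sketch, not measurements of this system):

    # Per-timestep cost for a communication-heavy parallel code on GigE.
    # Rendering skips the exchange term entirely, which is why it
    # scales happily on cheap Ethernet.
    compute_s = 0.010       # 10 ms of number crunching per step
    latency_s = 50e-6       # ~50 us per message on GigE
    bandwidth = 125e6       # GigE: ~125 MB/s
    msg_bytes = 1e6         # 1 MB halo exchange per neighbor
    neighbors = 6           # e.g. a 3D domain decomposition

    exchange_s = neighbors * (latency_s + msg_bytes / bandwidth)
    print(f"compute {compute_s*1e3:.1f} ms, exchange {exchange_s*1e3:.1f} ms")
    # Exchange (~48 ms) swamps compute (10 ms), so such codes crave a faster
    # interconnect; an embarrassingly parallel render job never pays this cost.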
Re:Explain this "new" math to me... (Score:3, Interesting)
You have to understand this is rendering, not the ordinary task of running a multi-threaded desktop environment. If they are using something like Maya3d or their own in-house app, then the answer is: yes way.
When you render 3D it uses every CPU you have, and every register and all the cache on each CPU, if the rendering software is up to snuff. So what you are looking for is raw horsepower. Doubling your CPU count can effectively cut your render time in half (in theory; if one scene has more detail/polygons than another, the CPU given those frames will take longer to finish, but most of the time frames are comparable). With high-end rendering you are looking at massive amounts of time spent rendering, depending on how many frames and how high a quality (resolution) you are shooting for.
Although I am only familiar with low-end Maya3d setups, my hunch is they have customized the OS to run only a minimal system plus the render-farm software. If they are using Maya3d or something equivalent, it will take advantage of all the CPU registers, cache, and whatnot. If you have Xeons with large amounts of on-die cache, you will see more of a benefit in rendering than with a regular P4. However, the P4 can usually outperform the Xeon on multi-threaded interactive work like running a video game or an OS with a GUI.
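To put rough numbers on those "massive amounts of time" (the per-frame render time below is an illustrative guess, not Weta's actual figure):

    # Back-of-envelope farm math: independent frames mean wall-clock
    # time divides almost linearly by CPU count.
    frames = 187 * 60 * 24      # King Kong ran roughly 187 minutes at 24 fps
    hours_per_frame = 2.0       # guess; plausible for heavy effects shots
    cpus = 1144 + 500           # old bank plus the new blades

    total_cpu_hours = frames * hours_per_frame
    wall_clock_days = total_cpu_hours / cpus / 24
    print(f"{total_cpu_hours:,.0f} CPU-hours, ~{wall_clock_days:.0f} days on the farm")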
Re:power draw (Score:4, Interesting)
What kind of power draw do the blade chassis have? What blades? What version of Red Hat?!?!?!
Unfortunately TFA is very short on details and reads more like "Peter Jackson went out and bought 500 computers! Woo!"