Supercruncher Applications 58
starheight writes "Bill McColl has written an article contrasting traditional massively parallel supercomputing with a whole new generation of compute-intensive apps that require massively scalable architectures and can deliver both incredible throughput and real-time responsiveness when processing millions or billions of tasks."
Yay!!! (Score:5, Funny)
Re: (Score:1)
Example App: (Score:4, Funny)
Wow (Score:5, Funny)
The real question.. (Score:1)
Re: (Score:1)
Modders, feel free to mod me off topic
Re: (Score:1)
Re: (Score:2, Informative)
What about how the design scales? (Score:4, Informative)
The term "massively parallel" indicates a system operating without those constraints.
Re: Amdahl's Law (Score:3, Interesting)
Re: (Score:2)
Either that, or your imagination is lacking somewhat. Personally, I've wanted lots of cores since I was in kindergarten. I'm quite sure I can find a use for them all.
Re: (Score:3, Interesting)
What? You are on drugs, yes? And not the good kind?
What about video encoding? Besides codec-level parallelism, you can also split the stream at keyframes and hand each chunk between two keyframes off to a core (or node) for processing. This is very mass-market - more and more people want to make snazzy home movies.
In fact, far more people would like to do this than render 3d movies.
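A rough sketch of the chunked-encoding idea above, assuming a hypothetical encode_chunk stub standing in for a real codec:

```python
from multiprocessing import Pool

def encode_chunk(chunk):
    # Stand-in for a real encoder: each keyframe-to-keyframe
    # chunk can be compressed independently of the others.
    start, end = chunk
    return f"encoded[{start}:{end}]"

def split_at_keyframes(n_frames, gop_size):
    # One chunk per group-of-pictures (keyframe interval).
    return [(i, min(i + gop_size, n_frames))
            for i in range(0, n_frames, gop_size)]

if __name__ == "__main__":
    chunks = split_at_keyframes(n_frames=300, gop_size=100)
    with Pool() as pool:
        parts = pool.map(encode_chunk, chunks)  # one chunk per worker/node
    print(parts)
```

Each chunk is independent, so the work farms out cleanly to however many cores (or cluster nodes) you have.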
Re: (Score:2, Informative)
100 cores is not massively parallel. The kind of scaling we're talking about is much higher. Think thousands of cores each with hundreds of threads.
This is the kind of scaling that weather centers are just starting to reach today. It's the kind of scaling that will require a radical rethinking of how consumer software is designed and what tools we need to make that design process easier.
In this world, software is king. You won't care who your chip vendor is. You'll care who provides your compiler, de
Re: (Score:3, Interesting)
Amdahl's law isn't really a problem, it's just a thing. The law of gravity is not a problem, it's just a thing.
Supercomputing is really cool with embarrassingly parallel problems and things like superlinear speedup. Supercomputing is a mess because it's basically a hack. Funding and support are always issues. Even though we buy thousands of CPUs at a time, they are still a blip on the radar compared to regular server sales, and vendors don't cater to supercomputing because, ironically, it's not much of a marke
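Amdahl's "thing" is easy to quantify. A minimal sketch of the speedup bound for a program with a given serial fraction on n processors:

```python
def amdahl_speedup(serial_fraction, n_procs):
    # Amdahl's law: only the parallel part of the program scales,
    # so the serial fraction caps the achievable speedup.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even 5% serial code caps speedup below 20x, no matter how many cores:
for n in (4, 100, 10000):
    print(n, round(amdahl_speedup(0.05, n), 2))
```

The bound approaches 1/serial_fraction as cores go to infinity, which is why the law is "just a thing" you design around rather than argue with.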
Re: (Score:2)
I think he was on to something fundamental about problem organisation, myself -- why else would large, otherwise healthy functioning companies end up with nine coxswains per rower?
Re:Massively parallel?! (Score:5, Insightful)
Sure there are. Say you want to find the maximum of 4 integers. You can do that in parallel, but you won't gain much if you have more than two processors (or execution units). Contrast this with, say, rendering an image using a path tracer, where each ray is independent of the others. The first problem is hard to scale up; the second one isn't. I'd say that means that ray tracing is a "more parallel" task.
Also, writing algorithms that have to run efficiently on 10000 processors is not exactly the same as writing ones that have to run on 4 processors, in the same way that writing a multiplayer game that handles four players isn't the same as writing one that can handle thousands of concurrent players. So they toss on the "massive" part to separate the cases. At least that's my take on it.
Re: (Score:1)
i.e. [these algorithms are] massively parallel scalable.
buzzwords help too.
Re: (Score:1)
Identifying problems that are well suited for a multi-processor platform can be quantified. It's hard to scale up when you define it as 4 integers. Try finding the maximum of n integers:
int parallelMax(int a[], int n) {
    int m = a[0];
    #pragma omp parallel for reduction(max : m)
    for (int i = 1; i < n; i++) if (a[i] > m) m = a[i];
    return m;
}
Re: (Score:2)
Re: (Score:2, Informative)
But some problems can use massively parallel computers, designed to solve embarrassingly parallel problems.
Re: (Score:3, Interesting)
Re:Massively parallel?! (Score:4, Interesting)
Auto scaling (Score:1)
Overall the research project covered scaling from the CPU/core through cache to DRAM to disk to network, even up to the point where you'd have to actually scale the dispatch
Wow, can you imagine a Beowulf cluster of these? (Score:4, Interesting)
Re: (Score:1)
Re:Wow, can you imagine a Beowulf cluster of these (Score:1)
Re:Wow, can you imagine a Beowulf cluster of these (Score:2)
In fraud detection, thinking of the credit card companies, it's typically looking for patterns after the transaction has already gone through, and if
Re: (Score:2)
Prognosticating (Score:3, Interesting)
Re: (Score:1)
Right ... know where we are ... for traffic ... (Score:1)
Slow news day, huh? (Score:4, Insightful)
Can we please have a "no links to random, boring blogs week" on Slashdot?
Re: (Score:2)
Re: (Score:2)
However, this guy's blog might have been taken from a couple of high-school stoners. (Supercomputers for weather? Who would have thunk it?) There's really no insight on this blog; there's nothing that the average geek wouldn't be able to rattle off in five seconds without really thinking.
I know - this blogger is quite "average"
that paragraph (Score:3, Funny)
bah weep grana weep minibom
The Folding@home SMP client is ready. (Score:2, Interesting)
http://forum.folding-community.org/fpost166684.html#166684 [folding-community.org]
http://fahwiki.net/index.php/SMP_client [fahwiki.net]
Leeeerrroooooy Jennnnkkinnnnsss!!! (Score:3, Funny)
"We have a 32.33, repeating, of course, percent chance of survival."
"That's better than we usually do."
Redundancy? (Score:1)
# Sparse linear algebra
What about Average linear algebra?
# Structured grids
# Unstructured grids
Are there any other types?
(** Warning: Car analogy...)
Isn't that kind of like selling a car and listing on the spec sheet:
# Goes slow
# Goes fast
Re: (Score:3, Informative)
For sparse matrices, you can use algorithms that are vastly more efficient than the algorithms you otherwise would use for non-sparse matrices of the same size. This is called sparse linear algebra. If you can't use the algorithms for sparse linear algebra, it doesn't matter whether you call it "dense", "average", "standard", "normal", "regular", "common", or whatever the fuck else.
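A toy illustration of why the distinction matters, using a bare-bones coordinate-list representation (not a real sparse library): the work is proportional to the number of nonzeros, not the full matrix size.

```python
def sparse_matvec(entries, x):
    # entries: list of (row, col, value) nonzeros.
    # Work is O(nnz), independent of the nominal m*n matrix size.
    y = {}
    for r, c, v in entries:
        y[r] = y.get(r, 0.0) + v * x[c]
    return y

# A 1,000,000 x 1,000,000 matrix with only three nonzeros:
entries = [(0, 0, 2.0), (5, 7, 3.0), (999999, 0, 1.0)]
x = {0: 1.0, 7: 2.0}
print(sparse_matvec(entries, x))  # touches 3 entries, not 10^12
```

A dense algorithm on the same matrix would do a trillion multiplies, almost all of them by zero.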
Bill McColl was my thesis supervisor at Oxford (Score:1, Interesting)
Last time we spoke, Bill said that he was interested in the issue of massively scaled computers that can handle fault tolerance pre-emptively. He compared today's supercom
Why do I want multiple cores? (Score:2)
At one point you have the app running on one core, the OS on another, the graphics on the GPU, and the network stack on its own CPU. You get lower latency because your app's CPU doesn't have to time-slice with the others.
I can see parallel makes, conversion (wav2mp3, video formats etc), formatting (commercial skipping, panorama stitching). I/O is going to be the ultimate bottleneck.
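The batch-conversion case above maps straight onto a worker pool. A minimal sketch, with a hypothetical convert stub in place of a real transcoder:

```python
from multiprocessing import Pool

def convert(path):
    # Stand-in for a real converter (e.g. wav -> mp3). Each file is
    # independent, so a pool scales with cores -- until disk I/O
    # becomes the bottleneck, as noted above.
    return path.rsplit(".", 1)[0] + ".mp3"

if __name__ == "__main__":
    files = ["a.wav", "b.wav", "c.wav"]
    with Pool() as pool:
        print(pool.map(convert, files))  # one file per worker
```

Parallel makes work the same way: independent compilation units dispatched to as many workers as there are cores, with shared storage as the limiting resource.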
What kind of consumer applications would benefit?
Re: (Score:2, Interesting)
Some off-the-wall future consumer things to consider:
Errors (Score:3, Insightful)
None of those offer or require real-time guarantees.
Google Alerts is here now.
A better article would have started with the table that defines "supercruncher" and proceeded to describe the architectural issues of building one. Ideally it would have addressed the software challenges.
The Weightless Economy in disguise (Score:3, Informative)
The supercomputer function is still the same. It still solves linear algebra, n-body methods, structured grids, and finite state machines. The user of the supercomputer is different. The user is now living on $1 a day in Mongolia.
For the wealthiest countries to stay wealthy, they have to focus on not the computing part but marketing the computing, creating the interface to the math, managing the business around the computing.
Re: (Score:1)
"Real-time" only in the casual non-technical sense (Score:1)