Is Parallelism the New New Thing?
astwon sends us to a blog post by parallel computing pioneer Bill McColl speculating that, with the cooling of Web 2.0, parallelism may be a hot new area for entrepreneurs and investors. (Take with the requisite grains of salt, as he is the founder of a Silicon Valley company in this area.) McColl suggests a few other upcoming "new things," such as SaaS as an appliance and massive memory systems. Worth a read.
About time (Score:5, Funny)
Multithreading Is to Blame (Score:2, Insightful)
The reason is that all academic researchers jumped on the multithreading bandwagon as the basis for parallel computing. Unfortunately for them, they could never get it to work. They've been at it for over twenty years and they still can't make it work. Twenty years is an eternity in this business. You would think that after all this time, it would have occurred to at least one of those smart scientists that maybe, just maybe, multithreading is not the answer.
Re: (Score:2, Informative)
Hyperbole much? Parallel systems such as MPI have been the staple of high-performance computing since the mid-'90s, and there are plenty of developers (including myself) who can write multi-threaded code without breaking into a sweat, and get it right.
At what point did parallel and concurrent programming "fail"? I really must have missed that memo.
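For what it's worth, here is a minimal sketch (in Java, chosen purely for illustration; nothing in the thread specifies a language) of the kind of get-it-right multithreading the parent means: partition the work, give each worker its own slice, and join the partial results, so there is no shared mutable state to race on.

    import java.util.*;
    import java.util.concurrent.*;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            final long[] data = new long[1_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            int workers = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            int chunk = data.length / workers;

            List<Future<Long>> parts = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final int lo = w * chunk;
                final int hi = (w == workers - 1) ? data.length : lo + chunk;
                parts.add(pool.submit(() -> {              // each worker sums its own slice
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    return s;
                }));
            }

            long total = 0;
            for (Future<Long> f : parts) total += f.get(); // join the partial sums
            pool.shutdown();
            System.out.println(total);
        }
    }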
Re:Multithreading Is to Blame (Score:4, Insightful)
Re: (Score:2)
A direct quote from the article. I guess, in a sense, this is true. It is true that it is impossible to ensure that a lock can be obtained. On a hardware level, no less.
On the other hand, this statement is a load of bullshit.
How utterly ridiculous (Score:2)
Now, are they going to make things faster in exactly the same way as clock increases? Hell no.
Do they take some thought to implement? Hell yes.
However they are here and have been here for decades.
This is unless you are referring to a very, very narrow definition of parallel processing that is more applicable in scientific than in business or home forums - breaking down set, tricky mathematical
Re: (Score:2)
Re: (Score:2)
When I was in graduate school in the mid-'90s I thought Parallelism would be the next big thing. Needless to say, I was a bit early on that prediction.
Had a company in the early 90s dedicated to heterogeneous parallel computing in what we now call genomics and proteomics. Despite the ongoing boom in DNA sequencing and analysis, it was hard at the time to interest either end-users or (especially) investors in distributed processing. Most worried that it was overkill, or that the computations would somehow be out of their control. How times change...
Re: (Score:2)
Re:About time (Score:4, Insightful)
When I was in grad school back in the 1970s, people thought parallelism would be the next big thing, and it had some interesting technical challenges, so I got into it as much as was possible back then. Then I got out into the Real World [TM], where such ideas just got blank looks and "Let's move on" replies.
Somewhat later, in the 1980s, I worked on projects at several companies that thought parallelism was the next big thing. That time around, I got my hands on a number of machines with hundreds of processors and gigabytes (Wow!) of memory, so I could actually try out some of the ideas from the previous decade. The main things I learned were that 1) many of the concepts were viable, and 2) debugging in an environment where nothing is reproducible is hard. And I moved on, mostly to networking projects where you could actually do loosely-coupled multiprocessing (though management still gave you blank looks if you started talking in such terms).
Now we're getting personal computers with more than one processor. It's been a bit of a wait, but we even have management saying it's the "new^N thing". And debugging parallel code is still hard.
I'll predict that 1) We'll see a lot of commercial parallelized apps now, and 2) those apps will never be debugged, giving us flakiness that outshines the worst of what Microsoft sold back in the 1980s and 1990s. We'll still have the rush to market; developers will still be held to nonsensical schedules; debugging will still be treated as an "after-market" service; and we developers will still be looking for debugging tools that work for more than toy examples (and work with a customer's app that has been running for months when a show-stopper bug pops up).
There's a reason that, despite the existence of multi-process machines for several decades, we still have very little truly parallel code that works. Debugging the stuff is hard, mostly because bugs can rarely be reproduced.
(Of course, this won't prevent the flood of magical snake-oil tools that promise to solve the problem. There's a lot of money to be made there.)
Re: (Score:3, Interesting)
It was a highly parallel machine. At every level. It could execute 10 instructions at once. There was a 10-way path to main memory so we could do 10
Re:ACtually (Score:4, Funny)
Paul Otellini ?
I didn't know you posted on slashdot !
So what's up man ? Can I buy you a beer ?
Re:About time (Score:5, Informative)
Windows is not the only closed-source proprietary operating system out there. AIX and Solaris have supported parallel functions for a number of years, and various IBM mainframe operating systems have had those functions since the '70s. There were architectures that had them in the '60s.
Proprietary closed-source operating systems had these functions FIRST before Linux was a twinkle in Linus Torvalds's shorts.
Re:About time (Score:5, Funny)
Do not mock the shorts of Torvalds, for they are mighty indeed!
Always Was, Always WIll Be (Score:3, Funny)
Parallelism has had broad applicability in graphics. It's def
Let me know when I can buy (Score:3, Insightful)
Re: (Score:2)
32G chips and put them in 4 slots on my 64-bit PC before talking about 'massive memory'
Can't you already do that with a server motherboard? Even if you're looking for a PC, Skulltrail supports gobs of RAM and 8 cores.
On the server side, Intel is coming out (soon) with Dunnington [wikipedia.org], which will be a 6-core single-die CPU with a monster cache... AND you can put 4 of them on a motherboard, giving you a 24-core machine. Then, you can also get custom workstations (Tyan?) that support multiple motherboards in a single box with a high speed interconnect. This is only going to get better when CSI/QPI [wikipedia.org] g
Re: (Score:3, Insightful)
Re: (Score:2)
"Massive Memory" refers to memory density where it's appropriate to use memory for applications that used to require a hard drive and then hard drives are used for long term storage only. This isn't meant for your daily PC. The current state of technology typically uses specialized rack-mount clusters of dozens or hundreds thin blades where each blade might have 16-64G of memory on standard 4 GB ECC Dimms with 8 memo
TRIPS (Score:4, Interesting)
Re: (Score:3, Interesting)
If each subsequent step is dependent on the pr
Re:TRIPS (Score:4, Interesting)
I guess my point is that I think we'll actually create the basic, expandable model fairly quickly. Would you agree that today's supercomputing, which utilizes parallelization on a scale far beyond desktop computing, has successfully harnessed parallelization? I hope you would. If so, then the next step is miniaturization of what supercomputing is already doing. That step is just now taking place. It's not something that will happen overnight, but I do think that after we've fully integrated parallelization into everyday computing we'll be back to the same old game again: that of looking for ever better ways to increase FLOPS through transistor/switch speed.
My basic thoughts on this are that it is, in theory, easier to model the perfect parallelization of a program, and the optimum number of cores for a specific type of computer, than it is to model the fastest possible clock speed of a CPU. Because of this we'll probably see diminishing returns in advancement of parallelization at an accelerated rate compared to CPU design and clock speed.
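One way to make the parent's modelling point concrete is Amdahl's law, which the comment doesn't name but is clearly gesturing at: speedup on N cores is bounded by the serial fraction of the program, so the returns from adding cores diminish predictably. A tiny Java sketch (the 5% serial fraction is just an illustrative assumption):

    public class Amdahl {
        // speedup = 1 / (s + (1 - s) / n), where s is the serial fraction
        static double speedup(double serialFraction, int cores) {
            return 1.0 / (serialFraction + (1.0 - serialFraction) / cores);
        }

        public static void main(String[] args) {
            for (int n : new int[] {2, 4, 8, 16, 64}) {
                // with 5% serial work, even 64 cores give only about 15x
                System.out.printf("%d cores -> %.1fx%n", n, speedup(0.05, n));
            }
        }
    }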
Re: (Score:3, Insightful)
I don't think "supercomputer" approaches (which I admit I think of as the same as "massively parallel scientific computing") are applicable to most applications today. Exceptions might be video, picture, and audio editing; and maybe certain types of database operations. If you have other examples I'd be most interested in hearing about them.
Re: (Score:2)
Multimedia and database applications are foremost in my mind, followed by applications which need to establish some form of intelligent interface with the user, such as the operating system GUI and games. These forms of processing are already receiving the multi-core parallelization treatment. Incorporating more cores is not that difficult, and it will b
1% of programmers (Score:5, Insightful)
This seems far, far too low. Admittedly I work in a place that does "parallel programming," but it still seems awfully low.
Re: (Score:2)
Re: (Score:2)
Perhaps they meant it as in "specifically designing to scale up", as opposed to a developer who just uses a thread to do some background processing.
One thing that's always saddened me is that most embarrassingly parallel problem domains, like web and database development, are still in the dark ages with this. They have so much potential, but almost nobody seems to care. To date the only frameworks I know of that allow fully asynchronous, efficient processing of requests and database queries (so that you don't nee
Re: (Score:3, Interesting)
How do you see that as different from what Java J2EE does? Most J2EE servers these days use pools of threads to handle requests. These threads are then utilized based on poll/select APIs so that one thread can handle many requests depending on the availability of data. Database connections are similarly pooled and reused, though any optimization on the blocking level would need to be done by the JDBC driver.
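A rough sketch of the pooled-thread request handling described above, using plain java.net rather than an actual J2EE container (the port and canned response are made up for illustration): a bounded pool of workers services many connections instead of spawning an unbounded thread per request.

    import java.io.*;
    import java.net.*;
    import java.util.concurrent.*;

    public class PooledServer {
        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newFixedThreadPool(32);   // bounded worker pool
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    final Socket client = server.accept();
                    pool.execute(() -> handle(client));                // hand off to a pooled thread
                }
            }
        }

        static void handle(Socket client) {
            try (Socket c = client;
                 BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                 OutputStream out = c.getOutputStream()) {
                in.readLine();                                         // consume the request line
                out.write("HTTP/1.0 200 OK\r\n\r\nhello\r\n".getBytes());
            } catch (IOException e) {
                // a failed request must not kill the worker thread
            }
        }
    }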
Re: (Score:2)
How do you see that as different from what Java J2EE does? Most J2EE servers these days use pools of threads to handle requests. These threads are then utilized based on poll/select APIs so that one thread can handle many requests depending on the availability of data. Database connections are similarly pooled and reused, though any optimization on the blocking level would need to be done by the JDBC driver.
I've never used Java/J2EE before so I couldn't say.
.NET uses a per-operation callback to notify the app of I/O completion - it doesn't expose whatever internal mechanism it uses to achieve async, so the server can choose the most efficient method (iocp, epoll, kqueue, how many threads, etc.). If you use SQL Server, DB queries can be done async too. If you do it right, in many cases you can keep from ever blocking on I/O.
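The same completion-callback idea can be sketched with Java's NIO.2 asynchronous channels; this is offered only as an analogy to the .NET mechanism described above, not as its API. The runtime picks the underlying notification mechanism and calls back when the I/O completes, so no thread sits blocked on it.

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;

    public class AsyncEcho {
        public static void main(String[] args) throws Exception {
            AsynchronousServerSocketChannel server =
                    AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(9090));
            server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
                public void completed(AsynchronousSocketChannel client, Void att) {
                    server.accept(null, this);           // keep accepting further connections
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    client.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                        public void completed(Integer n, ByteBuffer b) {
                            b.flip();
                            client.write(b);             // echo back, also asynchronously
                        }
                        public void failed(Throwable t, ByteBuffer b) { }
                    });
                }
                public void failed(Throwable t, Void att) { }
            });
            Thread.sleep(Long.MAX_VALUE);                // keep the demo process alive
        }
    }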
Re: (Score:2)
I think it might be worth a Slashdot poll to see how many programmers have experience working with threads or MPI. My guess is a lot higher than 1%, especially given how prevalent Java is these days.
Re: (Score:3, Interesting)
The entire hierarchy system in the IT fields has to deal with the painfully obvious fact that less than 1% of programmers know what they're doing: that is, in an ideal scenario, everyone would know what they're doing, and you'd have a FEW hardcore computer scientists to handle the nutso theoretical scenarios (most parallel programming, for example, can be done with only basic CS knowledge
Re: (Score:3, Insightful)
Consider that parallel computing means keeping extra monolithic cores busy. There are a number of programmers that need the discipline to know how to spawn, use, and tear down threads to keep them busy. But there are a helluva lot of them that plainly don't need to know. What we lack are reasonable compilers that allow the hardware layer to be sufficiently abstracted so that code can adapt to hardware infrastructure appropriately. If that doesn't happen, then code becomes mach
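One later mainstream answer to the kind of abstraction being asked for here is Java's parallel streams over the common fork/join pool; a minimal sketch, not something the poster proposed: the code states what is parallelizable, and the runtime sizes the work to however many cores the machine actually has.

    import java.util.stream.LongStream;

    public class AdaptiveSum {
        public static void main(String[] args) {
            // sum of squares; the runtime splits the range across available cores
            long total = LongStream.rangeClosed(1, 1_000_000)
                                   .parallel()
                                   .map(i -> i * i)
                                   .sum();
            System.out.println(total);
        }
    }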
Re: (Score:3, Insightful)
Not really. The structure of a large system still has to be defined by someone. The key difference is that the architect would get a lot more feedback from his team, and could possibly even farm out high-level pieces of the design to be further architected by other developers.
Re: (Score:2)
Note I said software architect, and I specifically stated I was not talking about system architects. Big difference between the two.
And yes application servers DO count. But during one connection, there's a LOT of things that can be sp
Re: (Score:2)
Not really. A software architect represents a division of labor between the guys who build the hardware and the guys who build the software. The software architect obviously deals with the software aspect of the system (and when I say system, I mean a complete, large-scale application) and is thus responsible for how the code will be organized and constructed. You simply can't
Re:1% of programmers (Score:5, Insightful)
I think your experience is wildly skewed toward the high end of programming skill. The percentage of working programmers who can't iterate over an array is probably in the 15-20% range, even without getting into whether "web programmers" are included in that statistic. I'd be astonished if the number with parallel experience is significantly above 1%.
Re:1% of programmers (Score:5, Insightful)
If you ask how many can "regularly achieve significant performance through use of multiple threads" then 0.1% is far too high. If you mean "can exchange data between a userland thread and an ISR in compliance with the needs of reliable parallel execution" then it's a safe bet that less than 0.1% are mentally up to the challenge. /. readers are not typical of the programming community. These days people who can drag-and-drop call themselves programmers. People who can spell "l337" are one!
Re: (Score:2)
I was thinking more along the lines of "learned something about parallelism in a CS class and remember having done so, although not necessarily what it was".
Re: (Score:2)
If you ask how many can "regularly achieve significant performance through use of multiple threads" then 0.1% is far too high.
Add MPI or distributed parallel processing, and I would guess that number drops even lower. This is not a trivial topic, and for many years, just wasn't necessary. During the clock speed wars it was a non issue for most software, but with clock speeds capping out and core counts on the rise, I think we will see a large shift to this way of doing things in years to come. Most programmers follow the way of necessity, and that is what it will become.
Re: (Score:2)
Re: (Score:2)
The percentage of working programmers who can't iterate over an array is probably in the 15-20% range, even without getting into whether "web programmers" are included in that statistic.
As a "web programmer", I find that statistic really hard to swallow. I've never met any professional developer who couldn't iterate over an array. Do you have anything, even anecdotal evidence to support that?
Unless by "web programmer" you're including anyone who ever took a class on Dreamweaver.
Re: (Score:3, Funny)
Conveniently, the DailyWTF steps in [thedailywtf.com] to provide some anecdotal evidence:
In fact, I've encountered quite a few programmers (whom I don't hire, so don't blame me) who don't
Performance is the feature, Parallelism the means (Score:2)
evolution, not revolution (Score:5, Insightful)
Decade after decade, people keep trying to sell silver bullets for parallel computing: the perfect language, the perfect network, the perfect os, etc. Nothing ever wins big. Instead, there is a diversity of solutions for a diversity of problems, and progress is slow but steady.
Re: (Score:2)
I mean, yes, there are all kinds of solutions. Most of them are completely unused, and we're back to threads and locks. Nothing's going to be perfect, but I'll buy the "no silver bullet" when we actually have wide adoption of anything -- even multiple things -- other than threads and locks.
Re: (Score:2)
Re:evolution, not revolution (Score:4, Insightful)
The next problem is that parallelism is not, as a rule, CPU-bound but network-bound. All the libraries in the world won't work when the network clogs and chokes.
The third problem is that coders are taught serial methods. Parallel thinking is very different from serial thinking. You run into problems that do not exist in the serial world, even on a timeslicing system like Linux. True parallelism, like true clockless computing, is a nightmare to do well. You can't just shove another library in and hope things'll work.
The fourth problem is the level of connectedness. Globus is a great toolkit for some things, but you wouldn't use it for programming a vector computer or - most likely - even a Beowulf cluster. It's a gridding solution and a damn good one, but grids are all it will do well. On the flip-side, solutions like bproc and MOSIX are superb mechanisms for optimally using a fairly tight cluster, but you'd never sanely use them on a grid. The latencies would make the very features that make those solutions useful in a cluster useless on a WAN-based grid.
I'm not sure I'm keen on Java on any parallel solution other than gridding. It's too slow, its threading model is still in its infancy, and the sandboxing makes RDMA an absolute nightmare to do safely. In fact, the very definition of sandboxing is that external entities can't go around poking bits of data into memory, which is the entire essence of RDMA - CPUless networking.
Regardless, there are some things that C++ and Java simply cannot do well that other, parallel-specific languages like occam-pi can do with extreme ease and safety. It is possible to prove an Occam program is safe. You cannot do likewise with a C++ or Java program.
Parallelism isn't just about more threads on one CPU. In a totally generalized parallel scenario, there may be any number of threads - a few tens of thousands would not be unusual - running on systems that may be SMP, multi-core, multi-threaded, vectored, clockless, or any combination of the above, where those systems may be on a tightly-coupled or loosely-coupled cluster, and where the cluster may be homogenous or heterogeneous, SSI or multi-imaged, where memory may be local, NUMA or distributed, and where these systems/clusters may be gridded over wide-area networks that may or may not be reliable or operational at any given time, and where threads, processes and entire operating systems may migrate from system to system without user intervention or awareness on the part of the application.
The number of true parallel experts in the world probably numbers fewer than a dozen. No, I'm not one of them. I'm good, I understand the problem-space better than the average coder, but I've talked to some of the experts out there and they're as far beyond any traditional programmer as a traditional programmer is beyond the chipmunk. A network engineer might consider themselves OK if they can set up OSPF optimally across a traditional star network of star networks. Any traditional routing protocol over a mesh without getting flaps and maintaining a reasonable level of fault tolerance would be considered tough. A butterfly network, a toroidal network or a hypercube would leave said network engineer a gibbering wreck. Modern supercomputers do not take up buildings. Modern supercomputers take up a few rooms. The interconnects take up entire buildings. And the air conditioning on top systems can be measured in football stadia.
OpenMOSIX is largely dead, because it was impossible to reconcile those who wanted load-balancing with those who wanted HPC. It's not that they can't be reconciled in theory, it's that the mindsets are too different to cram into one brain.
If one solution could solve parallelism, the Transputer would be the only processor in use today and Intel would
Parallel computing pioneer likes Parallelism (Score:4, Insightful)
Shock! And Awe!
"Next hot thing" my hiney (Score:5, Insightful)
Because really, does anybody believe Web-Two-Oh was anything but the regular web's natural evolution with a fancy name tacked on?
Re: (Score:3, Insightful)
Web 2.0 was a single name for an amorphous collection of technologies and philosophies. It was even worse than AJAX.
Parallelism is a pretty simple, well-defined problem, and an old one. That doesn't mean it can't turn into a buzzword, but I'm not convinced Web 2.0 can be anything but a buzzword.
Re: (Score:2)
Here's a clue for you: "Ultra cheap computers" is the next big thing in tech, or haven't you heard about the impending financial crisis that is about to consume the world's economies? That's right kiddies, no shiny new computers for your Christmas... just new ISOs from Linux
Meh, can't blame him for trying to drum up business, I guess.
Re: (Score:2)
Seriously-- we've had enough computing power for the average desktop tasks for a long time. Instead of putting 8 CPUs on a die and bottling up all the processing power on the desktop, put 8 CPUs in 8 separate domain-specific embedded devices sitting around you...
Re: (Score:2)
You should also be able to choose the number of processors you wish to have running so others can power down when not in use. Th
Re: (Score:2)
Multi-touch and parallelism are both the "next big thing," because multiple touches are touches in parallel!
; )
Re: (Score:2)
Re: (Score:2)
Web 2.0 is a definite set of "things" or "approaches" that "allow" you to (or possibly you "allow" them to) combine other "things" or "technologies" into a "newer" "-ish" mixture of "patterns" of "operation" of the "collection" of "computing" "resources" that "create" "value" beyond what "may" (or "may not") have been previously "achievable"
Got it?
More so now, but depends ... (Score:4, Insightful)
One thing that should also be noted, is that in certain cases you will need to accept increased memory usage, since you want to avoid tasks locking on resources that they don't really need to synchronise until the end of the work unit. In this case it may be cheaper to duplicate resources, do the work and then resynchronise at the end. Like everything it depends on the size and duration of the work unit.
Even if your application is not doing enough to warrant running its tasks in parallel, the operating system could benefit, so that applications don't suffer on sharing resources that don't need to be shared.
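A minimal Java sketch of that duplicate-then-resynchronise trade-off (the word-count task is invented purely for illustration): each worker fills its own private map, costing extra memory, and the maps are merged once at the end instead of every worker contending for a single lock throughout the run.

    import java.util.*;
    import java.util.concurrent.*;

    public class DuplicateAndMerge {
        public static void main(String[] args) throws Exception {
            List<String> words = Arrays.asList("a", "b", "a", "c", "b", "a");
            int workers = 2;
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            List<Future<Map<String, Integer>>> parts = new ArrayList<>();

            int chunk = (words.size() + workers - 1) / workers;
            for (int w = 0; w < workers; w++) {
                final List<String> slice =
                        words.subList(w * chunk, Math.min(words.size(), (w + 1) * chunk));
                parts.add(pool.submit(() -> {
                    Map<String, Integer> local = new HashMap<>();     // private copy, no locking
                    for (String s : slice) local.merge(s, 1, Integer::sum);
                    return local;
                }));
            }

            Map<String, Integer> total = new HashMap<>();
            for (Future<Map<String, Integer>> f : parts)              // resynchronise once, at the end
                f.get().forEach((k, v) -> total.merge(k, v, Integer::sum));
            pool.shutdown();
            System.out.println(total);
        }
    }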
Re: (Score:3, Interesting)
Being able to simply say
Race conditions (Score:3, Insightful)
Mucking around with language design and implementation highlights some of the deep problems that the "parallelize-everything" crowd often don't know about.
In your example, the loop can only be efficiently parallelized if it doesn't have any side effects. If any variables are written to out of scope of the loop, then they are exposed to race conditions in both the read and the write
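A small Java illustration of both halves of that point (the numbers are arbitrary): the first loop writes a variable declared outside its own scope from many threads at once and races; the second keeps the loop body side-effect free and lets the runtime perform a safe reduction.

    import java.util.stream.IntStream;

    public class LoopRace {
        static long racy = 0;

        public static void main(String[] args) {
            // data race: concurrent read-modify-write of a shared variable loses updates
            IntStream.range(0, 1_000_000).parallel().forEach(i -> racy += i);
            System.out.println("racy total (often wrong): " + racy);

            // side-effect-free body, reduced safely from per-thread partial sums
            long safe = IntStream.range(0, 1_000_000).parallel().asLongStream().sum();
            System.out.println("reduction total (always 499999500000): " + safe);
        }
    }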
Re: (Score:2)
My point was that if you go through a typical everyday business app (since those are probably the most common kind developed these days, or close), you'll find that this situation is more common than not. A LOT of loops contain operations without side effects and without shared resources. Being able to easily handle those nets you a large gain on the spot.
Basica
Re: (Score:2)
In a number of cases this means getting the API that you use optimised and doing fewer of the improvements yourself. For example, your average Java developer could probably push a large part of the
Yeah, where are these massively parallel machines? (Score:2)
Two cores? Big whoop! Four cores? Haven't seen one, and our computing center keeps current with the latest generation desktop machines.
But factors of (almost) two or (almost) four speedup are no big deal in the grand scheme of things. Wake me up when they are talking about 10 cores, or 100 cores.
But that is the problem. Our resident computer architecture dude tells us that maybe 10 or 16 cores
Transistor Efficiency (Score:2)
Re: (Score:2)
Didn't we have this debate last week? (Score:5, Informative)
And the conclusion?
It's been around for years numbnuts, in commercial and server applications, middle tiers, databases and a million and one other things worked on by serious software developers (i.e. not web programming dweebs).
Parallelism has been around for ages and has been used commercially for a couple of decades. Get over it.
Re: (Score:2)
Sorry, I totally got sick of using even that many buzz words. I'll stop now.
Re: (Score:3, Funny)
Re: (Score:2)
"You cannot put this type of thing into threaded design, & expect gains out of it... simply because B has to WAIT on the completion of A, first... no point in placing A or B onto diff. threads"
Oh sure, but where you have multiple threads doing separate, non-codependent tasks you can parallelise really quite well.
Anyway, yes, my main
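A minimal Java sketch of that distinction (the task names and timings are invented): A and B below are independent and genuinely run in parallel, while C depends on A's result and simply has to wait for it, threads or no threads.

    import java.util.concurrent.CompletableFuture;

    public class Dependencies {
        public static void main(String[] args) {
            CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> slowWork(2));
            CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> slowWork(3)); // runs alongside A
            CompletableFuture<Integer> c = a.thenApply(x -> x * 10);                         // must wait for A

            System.out.println(a.join() + " " + b.join() + " " + c.join());                  // prints 4 9 40
        }

        static int slowWork(int n) {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            return n * n;
        }
    }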
Please no (Score:3, Funny)
Coming soon: professional object-oriented XML-based AJAX-powered scalable five-nines high-availability multi-tier enterprise turnkey business solutions that convert visitors into customers, optimize cash flows, discover business logic and opportunities, and create synergy between their stupidity and their bank accounts - parallelized.
Most companies need parallel developers (Score:3, Informative)
McColl's blog is pretty interesting. He only recently started writing regularly again. High Scalability [highscalability.com] is another worthwhile resource in this area.
Re: (Score:3, Interesting)
Calling the latter architecture parallel computing is a misnomer; it is really "simultaneous" computing, i.e. things can happen at t
"the bastards say 'welcome'" (Score:5, Informative)
Here are a couple of lessons learned from that Ada experience:
1. Sometimes you want synchronization, and sometimes you want avoidance. Ada83 Tasking/Rendezvous provided synchronization, but was hard to use for avoidance. Ada95 added protected objects to handle avoidance.
2. In Ada83, aliasing by default was forbidden, which made it a lot easier for the compiler to reason about things like cache consistency. Ada95 added more pragmas, etc, to provide additional control on aliasing and atomic operations.
3. A lot of the early experience with concurrency and parallelism in Ada learned (usually the hard way) that there's a 'sweet spot' in the number of concurrent actions. Too many, and the machine bogs down in scheduling and synchronization. Too few, and you don't keep all of the processors busy. One of the interesting things that Karl Nyberg worked on in his Sun T1000 contest review was the tuning necessary to keep as many cores as possible running. (http://www.grebyn.com/t1000/ [grebyn.com] ) (Disclosure: I don't work for Grebyn, but I do have an account on grebyn.com as a legacy of the old days when they were in the ISP business in the '80s, and Karl is an old friend of very long standing....)
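A rough Java sketch of that kind of sweet-spot tuning, offered only as an illustration and not as Karl Nyberg's actual harness: the pool is sized from the hardware rather than hard-coded, so there is enough work to keep every core busy but not so much that the machine spends its time scheduling and synchronising.

    import java.util.concurrent.*;

    public class SweetSpot {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);   // one worker per core

            int tasks = cores * 4;                                        // a small backlog per core
            CountDownLatch done = new CountDownLatch(tasks);
            for (int t = 0; t < tasks; t++) {
                pool.execute(() -> { burnCpu(); done.countDown(); });
            }
            done.await();
            pool.shutdown();
        }

        static void burnCpu() {
            // pure busy work standing in for a real task
            double x = 0;
            for (int i = 0; i < 5_000_000; i++) x += Math.sqrt(i);
        }
    }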
All this reminds me of a story from Tracy Kidder's Soul of a New Machine http://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine [wikipedia.org]. There was an article in the trade press pointing to an IBM minicomputer, with the title "IBM legitimizes minicomputers". Data General proposed (or ran, I forget which) an ad that built on that article, saying "The bastards say, 'welcome' ".
dave
Re: (Score:2)
"...with the cooling of Web 2.0,..." (Score:5, Insightful)
Re: (Score:2)
Will "Parallelism" be the next new thing? No... (Score:2)
I say this as someone who has recently been tuning his career, experience, and personal projects towards learning more about parallel programming in practice, and I still don't see this as a "next big thing". It's just another in a long
Total Tripe (Score:2)
While not related to parallelism, I especially like "SaaS as an Appliance. One area within SaaS that is growing quickly is the opportunity to deliver a SaaS product as an appliance."
So you mean to tell me that the next big thing is insta
Flavor of the month? (Score:2)
It really sounds like he is shilling.
But seriously, first it was DP and COBOL.
Then expert systems.
Then relational databases.
Then object orientation.
Then webification.
Then XMLification.
Then web 2.0.
And probably a few I missed.
I think this is just another fad.
Re: (Score:2)
Re: (Score:2)
Man emerges from cave... and states the obvious. (Score:2)
What am I missing? (Score:4, Insightful)
Re: (Score:2)
Historically, software development has been lazy (with a few notable exceptions) and sat back relying on new silicon (EEs, Moore's Law, higher clock rates) to improve performance. But in the future that may change. Breaking your software up into parallel tasks may be required to get performance benefits from new silicon.
Don't believe? Intel & MS have made a $20M bet (Score:2, Informative)
You think that nobody has a real interest in parallel computing? Intel's put their money on it already - they've allotted $20 million between UC Berkeley [berkeley.edu] and University of Illinois [uiuc.edu] to research parallel computing, both in hardware and software.
I am a EECS student at Cal right now and I have heard talks by the UC Berkeley PARLab [berkeley.edu] professors (Krste Asanovic and David Patterson, the man who brought us RAID and RISC), and all of them say that the computing industry is going to radically change unless we figure
The three kinds of parallelism that work (Score:2)
We know three kinds of parallelism that work: clusters, shared memory multiprocessors, and graphics processors. Many other ideas have been tried, from hypercubes to SIMD machines, but none have been big successes. The most exotic parallel machine ever to reach volume production is the Cell, and that's not looking like a big win.
Graphics processors are the biggest recent success. They're still very difficult to program. We need new languages. C and C++ have the built-in assumption that all pointers poi
Re: (Score:2)
You've got two choices if you want to run lots of stuff in parallel. You can do it very easily and live with the restrictions, a la GPUs or simple coarse grained cluster stuff. Or you can have a lot more fle
Re: (Score:2)
The Cell isn't really an exotic parallel machine. It's a regular multiprocessor/multicore machine (like a ten year old desktop Mac) except that some of those processors are special purpose.
No, it's a non-shared-memory multiprocessor with limited memory (256K) per CPU. It belongs to roughly the same family as the nCube, although the Cell has a block DMA-like path to main memory rather than relying entirely on CPU to CPU data paths like the nCube.
It's typically used like a DSP farm; data is pumped throu
Then we are all doomed (Score:3, Interesting)
Re: (Score:2)
After Parallelism, the next big thing is... (Score:2)
Seriously, didn't we decide that parallel programming was the next big thing when Sutter wrote a big article in Dr. Dobbs a couple of years ago?
Welcome to the party pal, we've been here a while already!
Software as a Service != Parallelism (Score:2)
Honestly, I think that parallelism and SaaS are pretty much on opposite sides of the spectrum. Your typical SaaS application requires no parallelism whatsoever, since such applications are typically low-impact programs. The only real improvement over ordinary software is that you don't have to install it, don't have to maintain it, and you can access it anytime from anywhere.
A typical SaaS provid
Re: (Score:2)
A typical SaaS provider has a few dozen to a few thousand servers running a few hundred to a few million instances of his software. Since typically a single server will run many instances of the software, parallelization will "just happen" for free.
If that's not massively parallel, I don't know what is
Granted that my only experience with parallel programming was a class in college where the professor did number crunching in Fortran, I don't think a web server really qualifies as a "true" parallel system. A web server just has to send off requests from a queue into new processes/threads and let them go. There are no synchronization, locking, or communication issues to worry about. Threads in a web server might as well be entirely separate programs; they aren't the kind of repeated computation and aggrega
an 8000 node cluster is a parallel supercomputer (Score:2)
We've seen unambiguously that **GIGANTIC** data sets have their own value. Google's optimization of their algorithms clearly uses enormous amounts of observed user behavior. Translation efforts with terabyte source cannons. Image integration algorithms like that thing that Microsoft were demonstrating recently... gigantic data sets have power because statistics draw relationships out of the real world, rather
Dynamic Execution Background (Score:2)
The HPC Cluster people have thought about this stuff for a while. One approach that I have thought about is described in the article Cluster Programming: You Can't Always Get What You Want [clustermonkey.net]. There are two follow-on articles as well, Cluster Programming: The Ignorance is Bliss Approach [clustermonkey.net] and Cluster Programming: Explicit Implications of Cluster Computing [clustermonkey.net].
Of course if you really want to know how I feel about this: How The GPL Can Save Your Ass [linux-mag.com]
enjoy
Again, enough already!!! (Score:2)
Granted, yes, certain products could benefit from extreme threading, e.g. PostgreSQL breaking the hierarchy of query steps into separate threads and running them in parallel, or doing a more exhaustive search for the query planner using multiple threads, and stuff like that, but there is always going to be the competition between performa
Twenty-plus years on... (Score:2)
Forgive me for not gett
What's Web 2.0? Parallelism new? (Score:2)
With CompuServe (1969), BBSes (1970s), Usenet, e-mail & The Source (1979), and The WELL & Q-Link (1985), we have had online communities this whole time. With IRC we have been IM-chatting since 1988.
And almost all of this existed over the IP-based Internet starting around 1983, and starting in 1993 it became HTTP/browser-based. I have been using all of these since early after their inception.
So what is new about Web 2.0? I can't see
1980s: Thinking Machines, MasPar (Score:2)
I've got your parallelism right here... (Score:2)
Sure, it can be a bit tiresome to edit my blogroll in XEDIT, but the parallelism... woooosh! My AJAX just *flies* out of that VTAM; Beowulf clusters ain't got *nothing* on me!
I do NOT look forward to our parallel overlords (Score:2)
When the only type of tool we have is massively parallel systems, what kind of problems do you think we will apply that tool to, and what kind of problems do you think we will start ignoring? I would rather see us tackle both kinds of problems.
I suspect, however, we will end up with "the contact lens effect", where someone loses their contact lens in a dark alley, then looks for it under a streetlight "because the light is better over here".
-- Terry