Time to Get Good At Functional Programming?
prone2tech writes "From an article at Dr. Dobb's: Chipmakers have essentially said that the job of enforcing Moore's Law is now a software problem. They will concentrate on putting more and more cores on a die, and it's up to the software industry to recraft software to take advantage of the parallel-processing capabilities of the new chips. As is argued in this article, this means becoming proficient in parallel functional programming. The bad news? Getting good at functional programming is hard, harder than moving from iterative Pascal or Basic or C coding to object-oriented development. It's an exaggeration but a useful one: 'When you move to FP, all your algorithms break.'"
Convince your boss. (Score:5, Funny)
You mean oo isn't the only option?
Re:Convince your boss. (Score:5, Interesting)
Another question you might ask yourself is: are you going to let the CPU designers push you into a programming paradigm you are not effective in? Personally, I can see a machine being quite useful with, say, 16 or 32 cores, just because these machines do more than one thing at a time. But I'd much rather see them speed the cores up than endlessly multiply the number of them. There is a *lot* of room left to do this. 3D architectures offer more connectivity than is currently being used, and both the number and type of one-cycle instructions within a CPU can be increased until the answer is "all of 'em" (which I doubt they're ever going to get to). Orthogonality can likewise be increased until all instructions are available to the same degree for all registers and addressing modes, no matter what. Compilers like broad orthogonality (and so do assembly programmers, not that there are a lot of us left).
If CPU designers run off towards the horizon making many-core designs, what if no significant number of people follow them there? Which... frankly... pretty much seems to be the case. I've an 8-core machine, and just about the only things that actually use those cores together are the easy ones: graphics and waveform encoding/decoding. Aperture sure enough uses all my cores in a well-balanced fashion and can build a JPEG in a snap; but that's a far cry from my web browser doing the same thing while trying to render a page.
I'm just saying that the direction the CPU folks have chosen lately doesn't have to be the direction we actually end up going, and there are points in favor of this as the best choice.
Just some thoughts.
Re:Convince your boss. (Score:5, Insightful)
Well, the problem is that no matter how much you bash an algorithm with a functional language you can't magically make a sequential task into a parallel one.
Re:Convince your boss. (Score:5, Insightful)
Thing is, you probably have a parallel task that was already bashed into a sequential one.
Most real-world problems are parallel problems. Even the ones that aren't (say... compiling a file in C) you can usually run a lot of instances of in parallel.
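The point above (many independent instances of a serial task) can be sketched in a few lines of Python. `compile_unit` here is a made-up stand-in for any expensive, independent job such as compiling one C file; a real build would shell out to an actual compiler:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an expensive, independent job such as
# compiling one C file: each input is processed with no shared state.
def compile_unit(source_name):
    return f"{source_name}.o"

sources = ["main.c", "parser.c", "lexer.c", "codegen.c"]

# Sequential baseline.
sequential = [compile_unit(s) for s in sources]

# The same jobs run concurrently; because they are independent,
# the results are identical (Executor.map preserves input order).
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(compile_unit, sources))

print(parallel == sequential)  # True
```

For genuinely CPU-bound work in CPython you would swap in `ProcessPoolExecutor` to sidestep the GIL; the structure of the code stays the same.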
Re:Convince your boss. (Score:4, Interesting)
More CPU horsepower is the obvious choice for non-bus-limited operations, but things are starting to get expensive there, so I welcome a few extra cores. The real question is whether Intel and AMD will save some cash from their MMC (massive multi-core) projects and deliver a more sensible number of faster cores. You just can't depend on a given user's programs being set up to run efficiently in parallel.
Re:Convince your boss. (Score:5, Insightful)
I'd much rather have 64 fast cores than 16 slightly faster (but horribly power-inefficient) cores, and that's really the tradeoff that you're talking about. All of the reasonable ways of throwing transistors at getting faster straight-line code execution have already happened. Hell, even the unreasonable ones have been implemented, like fast 64-bit division units.
Intel and AMD have the choice of releasing dual-core processors that are 5-10% faster than last year's, or they can release 4/6-core processors for about the same transistor budget. The multi-core processors are better for almost everyone - there's no way to get a 5x speedup out of a 10% faster processor.
"everything that has been invented will be..." (Score:4, Insightful)
All of the reasonable ways of throwing transistors at getting faster straight-line code execution have already happened. Hell, even the unreasonable ones have been implemented, like fast 64-bit division units.
You, and the chipmakers, have apparently become stale.
There have been claims like this made throughout history. The patent office was supposedly closed because of this, Bill Gates once declared a maximum file size we'd ever need, and the list goes on and on.
If today's major chipmakers are too lazy and uncreative to come up with new ideas, then academics and entrepreneurs will come and eat their lunch.
Re:"everything that has been invented will be..." (Score:4, Interesting)
Sorry, I didn't mean to imply that we had come to the end of invention, even in a small area.
But all of the effective techniques *that we know about* have been implemented, and the chip makers have been banging their heads on the wall for years trying to come up with new ones. They went to multi-core processors as an absolute last resort, and 5 years later it's probably time for most programmers to accept that and learn to target multi-core platforms.
Re:Convince your boss. (Score:3, Informative)
Realistically, most code that will scale to 16 cores should scale to 64 cores. The real problem is code for 1/2 cores not scaling to 8+ cores. Once 8+ cores are common, this problem will quietly vanish as no-one clings to the 1/2 core solutions anymore. You can already see this to a large extent in server applications.
Re:Convince your boss. (Score:3, Interesting)
You don't have to be CPU-bound to benefit from extra cores. More cores means less OS-driven task switching, since all of the active processes can be isolated. The result is (slightly) lower latency for everything involved.
Plus, you're underestimating the obvious parallel processes on a machine. Right now I have around 50 Firefox tabs open. I'm struggling to close them fast enough (I tend to hang onto amazon.com pages and whatnot), but what if I was using a browser that separated that into 50 separate processes (instead of the Firefox approach, which is to hang whenever I open too many Youtube windows)? They're all periodically vying for CPU time when their Ajaxified Web 2.0 Timer goes off and re-downloads crap.
Re:Convince your boss. (Score:5, Insightful)
The way I look at it is that we are resigned to doing only certain things with a computer since, up until now, the computers we have created are only good at a certain class of problems. They suck donkey balls at most of the other interesting things that are immensely useful. Take optimization problems - there is an insane number of applications that we currently don't even think of since, like I said before, we've given up hope of being able to tackle them.
For example, I would love to have parallel computations figure out my 'optimal' tax return. Have my GPS calculate optimal routes - the routes I get now are pretty crappy, etc.
My point to all this is that most of the problems that look like they are one-input-one-output aren't really that. It's just that over the last 50 or so years, we've learned to model them as such out of sheer necessity.
Re:Convince your boss. (Score:3, Insightful)
My thoughts exactly. I can think of a multitude of possible applications that we have yet to tackle simply because single-core processors were not up to the task of computing the large data sets efficiently.
So many applications have been relegated to high-performance computing: weather prediction, 3D rendering, chaos math, simulation, etc. Software has been outpaced by what hardware is capable of (games notwithstanding) for some time now. Even this single-core Athlon64 3000 I'm using is about 100x faster than the 486 I used to think was blazing, and yet the best we have is new versions of the same programs with layer upon layer of feature creep added in, to the point where Word on the 486 ran about as well as Word does on this mini supercomputer. It amazes me how many applications aren't multithreaded. Even on a single core, you could at least still work on something while the program executed some task or job in the background. So many applications force you to just sit and wait on some infuriating progress bar.
We need more multithreading period.
Re:Convince your boss. (Score:3, Interesting)
And for what it's worth, evolution algorithms and genetic algorithms are going to be all over the place in 5 or 10 years, because they can handle 1000 or 10,000 cores without any problems. They're sort of like 'meta-algorithms' for parallelizing a linear algorithm. The only requirement is that the linear algorithm has to be set up to compete with other copies of itself. So you then have to deal with a bunch of stuff like encoding populations of potential answers, creating mutation and crossover operators, defining selection thresholds, and creating competition scenarios.
The most difficult part, in many respects, is creating an intuitive user interface for all of it.
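The machinery the parent describes (a population, mutation, crossover, selection) fits in a short sketch. Everything here is a toy: the "OneMax" problem (evolve a bit string toward all ones), the population size, and the mutation rate are all illustrative choices, and the fitness evaluations are the step that would fan out across many cores:

```python
import random

random.seed(42)

# Toy "OneMax" problem: evolve a bit string toward all ones.
GENES, POP, GENERATIONS = 32, 40, 60

def fitness(individual):
    # Independent per-individual evaluation: this is the parallel step.
    return sum(individual)

def mutate(ind, rate=0.02):
    return [b ^ 1 if random.random() < rate else b for b in ind]

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: POP // 2]  # simple truncation selection
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # close to GENES with these settings
```

Because each individual's fitness is computed independently, the scoring pass can be handed to a process pool with no change to the algorithm's logic, which is exactly why these methods soak up cores so easily.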
Re:Compiling C in parallel (Score:3, Funny)
Not in a functional language, particularly a pure one. This just ends up looking like map/fold or catamorphism and anamorphism calls.
Re:Compiling C in parallel (Score:5, Insightful)
Bull. An industry like software doesn't depend on talent alone. If all the engineering disciplines relied on talent, we'd be in the stone ages.
The real issue is that there are shitloads of people in the software business who either have grown up knowing nothing at all but the imperative paradigm, or who know about FP but think it isn't really useful, or think it's just the stuff of masturbatory brainiacs.
As to any claims somebody would make of being a supercoder, I would be highly skeptical. The software industry is plagued by delays and bugs. Security bugs in Linux and Windows have become the norm. People have grown up thinking it's as natural for a computer to get a "virus" as it is for you to catch a cold.
Many good people have worked hard on theories and products to produce safer code, but they are largely ignored, except in industries where it is critical (military, aviation, etc.). The average developer is an ignoramus.
Re:Convince your boss. (Score:5, Interesting)
That's not really true. Supporting instruction-level parallelism is reasonably straightforward - you just analyze the per-instruction data dependencies and output things ordered in such a way that the hardware instruction scheduler can do its thing. There's only one problem - optimally ordering a set of instructions is NP-hard. So if you want to do a good job ordering 5 instructions you're fine, 15 instructions is obnoxious, and 150 instructions is basically impossible.
Further, instruction level re-ordering doesn't change the basic algorithm. If you're compiling a heapsort routine, the re-ordered instructions are still going to implement heapsort. And heapsort doesn't parallelize well. Maybe you could build logic into your compiler to detect heapsort and automatically replace it with parallel quicksort, but that doesn't help you when you run into a non-sorting routine.
Realistically, programmers have to write parallel code for many-processor platforms. It's not amazingly difficult (given reasonable training and a reasonable set of tools), but it is different. But it's not something that's going to go away when the gcc guys / java team / VS.NET team implement some clever optimization.
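The "easy part" described above - analyzing per-instruction data dependencies - can be sketched directly. The instruction names and dependency edges below are made up for illustration; grouping instructions by dependency depth shows which ones *could* issue together, while the NP-hard part the parent mentions is doing this optimally under real resource limits:

```python
# Toy dependency analysis: instructions that do not depend on one
# another (directly or transitively) can be issued in the same slot.
# Names and edges are invented for illustration.
deps = {
    "load_a": [],
    "load_b": [],
    "add": ["load_a", "load_b"],
    "load_c": [],
    "mul": ["add", "load_c"],
    "store": ["mul"],
}

level = {}
def schedule(instr):
    # An instruction's earliest slot is one past its latest dependency.
    if instr not in level:
        level[instr] = 1 + max((schedule(d) for d in deps[instr]), default=0)
    return level[instr]

for instr in deps:
    schedule(instr)

# Group instructions by earliest possible issue slot.
slots = {}
for instr, lv in level.items():
    slots.setdefault(lv, []).append(instr)
print({lv: sorted(names) for lv, names in sorted(slots.items())})
# {1: ['load_a', 'load_b', 'load_c'], 2: ['add'], 3: ['mul'], 4: ['store']}
```

This greedy levelling is the cheap, tractable analysis; choosing an actual order when there are only so many execution units per cycle is where the combinatorial explosion lives.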
Re:Convince your boss. (Score:3, Interesting)
It's true that standard hash algorithms are generally serial in nature. Luckily, you can use well-tested hash algorithms to build hash trees and get the same security properties with excellent parallelism. Some modern hash algorithms are even designed for tree modes to support parallel computation.
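A minimal sketch of the hash-tree idea, using the standard library's SHA-256. This is a generic Merkle-style construction for illustration, not any particular standardized tree mode; the leaf hashes are the independent, parallelizable step, while only the short pairwise combines are serial:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    # Leaf hashes are independent of one another, so this level can
    # be computed in parallel across cores.
    nodes = [h(c) for c in chunks]
    while len(nodes) > 1:
        if len(nodes) % 2:            # duplicate the last node on odd levels
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

chunks = [b"block-%d" % i for i in range(8)]
root = merkle_root(chunks)

# Changing any single chunk changes the root, preserving the
# integrity property of the underlying hash.
tampered = merkle_root([b"block-0x"] + chunks[1:])
print(root != tampered)  # True
```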
Re:Convince your boss. (Score:3, Informative)
First, I'm not sure that I would use accounting procedures as my example of a "real world" problem, but whatever.
There are two more direct reasons why payroll computations aren't the sequential problem you're looking for. First, they're not CPU-bound. No one's saying that it would be so much easier to do payroll calculations if their CPU were just twice as fast. Second, most companies doing payroll calculations have more than one employee. Sure, the procedure you described is serial - but there's no reason you can't do it for every employee simultaneously in parallel and still get correct answers.
A similar thing is true for many other problems: One instance is serial and computationally trivial, in order to be computationally expensive you must have many instances, in which case your whole problem is naturally very parallel.
Re:Convince your boss. (Score:4, Informative)
For a lower-level example, almost any kind of query over some data set is an inherently parallelizable operation. This includes virtually all uses of list comprehensions, map and filter in Python, for example.
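The same "query" written two equivalent ways. Because the predicate and transform are pure functions, either form can be handed to a parallel map (e.g. `multiprocessing.Pool.map` over the filtered input) without changing the result:

```python
data = range(20)

def is_even(n):
    return n % 2 == 0

def square(n):
    return n * n

# Comprehension form and map/filter form express the same query.
as_comprehension = [square(n) for n in data if is_even(n)]
as_map_filter = list(map(square, filter(is_even, data)))

print(as_comprehension == as_map_filter)  # True
print(as_comprehension[:5])               # [0, 4, 16, 36, 64]
```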
Re:Convince your boss. (Score:4, Funny)
Your idea is not feasible because it screws up too many marketing campaigns. Please revise your idea and run it through sales before submission to management.
Re:Convince your boss. (Score:5, Interesting)
Actually, the reason for this is because of the heat consumption. As the power of a chip grows, the heat consumption grows much faster, and more cores are a much better way to get more speed with less power consumption and heat.
Re:Convince your boss. (Score:3, Insightful)
That doesn't make sense. The power used by the chip will exactly equal the heat generated by the chip due to the law of conservation of energy.
Ok, that sounds reasonable.
Re:Convince your boss. (Score:3, Informative)
It makes perfect sense if you consider that when he says "power" he wasn't referring to power consumption, but processing speed.
I.e., heat output increases faster than processing power does.
Re:Convince your boss. (Score:4, Insightful)
Another question you might ask yourself is, are you going to let the CPU designers push you into a programming paradigm you are not effective in?
This to me sounds like laziness. "But parallel programming is HARD!"
But I'd much rather see them speed the cores up than endlessly multiply the number of them. There is a *lot* of room left to do this.
Please elaborate on this further, because what you wrote that follows is all rather vague to me. Making cores faster means higher clock speeds and/or improved architectures. As far as clock speeds go, chip designers apparently feel they are running up against physical limitations that make this difficult. And when it comes to improved architectures, what is quite possibly the #1 thing that has been done over the years to improve performance? Increasing parallelism: pipelining, superscalar architectures, multi-threaded single cores, VLIW, etc.
Re:Convince your boss. (Score:5, Interesting)
personally, i think in terms of commodity computing, we don't really need to squeeze any more power out of the CPU than we've already got. use fully pipelined superscalar architectures, perhaps multithreaded dual or quad cores (for high-end workstations) and VLIW to maximize ILP efficiency. even at current processor speeds, 99% of the applications people (especially casual computer users) use have bottlenecks elsewhere (like memory & disk I/O speeds, internet bandwidth, user response time, etc.).
for the really resource-intensive stuff, like image/video/audio processing, cryptography, 3D graphics, CAD/engineering applications, scientific modeling, processing/manipulating financial data, etc. you would be much better off using a specialized dedicated vector coprocessor (as opposed to a general-purpose scalar processor that commodity CPUs tend to be). this way you can have a relatively low-power (and low clock rate) CPU for processing common SISD instructions that constitute 99% of all computing tasks, greatly cutting the cost of consumer and pro-sumer systems. and by using highly specialized coprocessors to handle the heavy lifting, those applications can be processed more efficiently while using less power (and at lower clock speeds) than trying to get a general-purpose scalar CPU to do the same work.
that is why GPGPUs are generating so much interest these days. it just so happens that most of the really processor-intensive applications consumers run greatly benefit from stream processing. game developers have long taken advantage of dedicated vector coprocessors with highly-specialized instruction sets and architecture made specifically for 3D gaming. DSPs with specialized architectures are also commonly used for hardware-accelerated video encoding, audio processing, etc. and now companies like Adobe are also seeing the advantages to using specialized vector coprocessors for their resource-intensive applications rather than having the CPU handle it.
and, honestly, how many different kinds of processor-intensive applications do most users run on a regular basis? if you're a graphic designer, your main processing power concern is only in relation to 2D/3D graphics. if you're an audio engineer or musician, then you're only going to use audio-related resource-intensive software. likewise, if you're a cryptographer/cryptanalyst, you probably won't ever run any audio editing software or 2D graphics software. therefore, it makes sense to pair moderately powered general-purpose scalar CPUs up with a powerful & highly specialized vector coprocessor like a GPU/DSP/Stream Processor/etc.
Re:Convince your boss. (Score:3, Insightful)
This to me sounds like laziness. "But parallel programming is HARD!"
That's probably a better argument than fighting the CPU designers. If parallel programming is hard, it's more expensive to create parallel programs. They'll take longer to write. It'll take longer to implement new features for them. It'll take more money to maintain them. All that seems like a good reason to avoid parallel programming.
On the other hand, if someone comes up with a new parallel programming paradigm that's slightly more difficult than procedural/object-oriented programming, but offers these benefits -- or if this exists already -- it'll make sense to switch to that paradigm as your performance needs increase.
No, look at the scope (Score:5, Insightful)
It is the cache coherency and memory bandwidth problems with existing architectures that are the problem. We need better low latency data transfer and significant improvement in auto-parallelism technology in compilers.
It should be clear that there has been very little serious investment in basic compiler technology and that is now needed. Academics have realised this but it takes time. The bandwidth issues are solvable else-when with more transistors.
Finally, we have a variety of programming paradigms (OO, functional, procedural, and more), each of which has its problem niche.
One thing we will certainly have to get away from is the idea that 'legacy' code can carelessly be rewritten in the flavor-of-the-month interpreted language, e.g. Java, C#, Perl, Python or Ruby. You can write 95% of your code in a programmer-friendly language. But the critical sections need to be in C, FORTRAN or assembler, and need to be very carefully optimized. That can give you 100x on the same architecture.
Re:No, look at the scope (Score:3, Insightful)
You're completely right that Perl, Python, and Ruby are pigs. Java looks a little like it was dropped on its head as a child. I don't know much about C#; it tastes too much like MSWindows for me to touch it.
While different programmers have different tastes in what counts as friendly, modern functional languages actually have compilers that produce code competitive with C, and a garbage collector can be a performance advantage for some use cases (faster thread-local allocation and greater sharing of data structures). But for some things, i.e. most numerical code, you're right that you just can't beat a static language like C or Fortran.
Re:No, look at the scope (Score:3, Interesting)
We have also seen the merging of those paradigms over the years. Every mainstream language today, with the exception of C, has some form of OOP. Every mainstream language either has or is getting (e.g. C++0x lambdas) first-class function values - with the unfortunate (for Java) exception of Java, now that Neal Gafter has moved from Google to Microsoft. Many languages are introducing lazy sequences and libraries centered around them (Scala, LINQ, Python 3000) Also related is syntactic sugar for list comprehensions (Scala, Python). Then, there are languages that are specifically designed to be combined OO/FP - Scala, OCaml, F#.
Actually, it's worth learning FP today solely for this reason. FP has found its way into mainstream by piggybacking on top of existing OO solutions. For example, if you want to be a good C# or VB programmer today, you have to learn LINQ - and that is all about FP [microsoft.com]. You may not even be fond of it yourself, but there will be code written using it, and you'd better understand it well enough to be able to maintain and extend it!
Re:Convince your boss. (Score:3, Interesting)
You're cherry-picking your data there (compilers, etc.). To see what's out there, we must look at what is commonly done and whether those things can benefit from parallel processing (and I see lots and lots of places, including the browser, where things could go parallel).
When we do that, we notice that what goes on in the gaming industry soon becomes standard everywhere else. And both modern platforms, the PS3 and Xbox 360 (I'm not including the Wii, as Nintendo has different goals than having bleeding-edge tech), have multi-core processors. Radically different architectures, but multi-core nonetheless. We are also seeing this, and have been for a while, with multi-core entering the desktop.
This isn't a coincidence. Moore's law has effectively ended for individual processors. Anything that might seem to go against that is nothing more than a *very* short-term dodge of the inevitable. Multi-core is the only thing that is going to see our computers get faster. This does make it a software problem. And a problem it is.
Why? Because for the longest time threading has been available, but basically no one has been teaching it or developing better techniques for doing it. So we have grossly under-experienced, under-trained people out there who are now having to deal with the reality of the situation. This is going to cause some initial problems and lots of people are going to bitch about it. But that is relatively short-term.
The gaming industry will work out many of the problems and will share a lot of that information. Insomniac Games is a good example of this. But, there will still be growing pains. But, it shouldn't be perceived as anything else but growing pains.
Quick, forward your thoughts to intel! (Score:3, Insightful)
Apologies (honest) for the sarcasm, but do you really think that if the CPU vendors had any useful room left to increase sequential processing performance, they wouldn't use it? Are 3D layouts out of the research labs yet? Are production fabs for 3D chips feasible?
I.e. what basis is there to think CPU designers have any choice (for the mid-term) but to spend the transistor income from Moore's Law on parallel processing?
Re:Convince your boss. (Score:3, Interesting)
True. OpenMP is great. But it's not the only solution.
MPI and the like are pretty much only useful if you are programming for a multi-computer (not multi-core or multi-proc) environment - e.g. a supercomputer. (OpenMP, for what it's worth, actually targets shared-memory multi-core machines.)
When dealing with just one computer, either multi-proc or multi-core, there's a much simpler solution: locks. They work great for multi-threaded and even multi-process programming.
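A minimal sketch of the lock-based approach: several threads update one shared counter, and a mutex keeps the read-modify-write step atomic so no increments are lost:

```python
import threading

counter = 0
lock = threading.Lock()

def work(iterations):
    global counter
    for _ in range(iterations):
        with lock:                # only one thread in here at a time
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no updates lost
```

Without the lock, the `counter += 1` read-modify-write can interleave between threads and drop updates, which is exactly the class of bug locks exist to prevent.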
Most programs would simply benefit from having a second thread. (Ever wonder why your program's interface hangs? Usually because it's only one thread and it's in a processing loop and not updating the GUI. A second thread solves that.)
And multi-document programs (e.g. FF, IE, etc.) benefit by having 1 main thread, and another for each document. (Thread, Process, take your choice. Either way...it benefits.)
But you also have to remember - your program is not the only program on the system. So additional cores that you are not using can be provided to a different program so it can run in parallel, thus making the computer more useful for the user - and let them do more, faster - even if its not with your program.
Broken Algorithm BS (Score:5, Insightful)
When you move to FP, all your algorithms break
If moving to a functional programming language breaks your algorithms, then you are somehow doing it wrong. That line doesn't even make sense to me. Algorithms are mathematical constructs that have nothing to do with programming paradigm. Assuming the language is Turing complete, how is that even possible?
Re:Broken Algorithm BS (Score:5, Informative)
That's where FP comes into play, as it allows developers to write heavily parallelized code that is also safe and fault-tolerant.
Re:Broken Algorithm BS (Score:5, Informative)
Moore's law states that the number of transistors on a chip will double every two years. By definition it's a hardware problem.
Obviously, utilizing those transistors is a software problem, but Moore's law doesn't say anything about that.
The article sucks. The author seems to know FP about as well as he knows Moore's law.
Re:Broken Algorithm BS (Score:3, Insightful)
I think Moore's law somewhat implies that those transistors will be used. It suggests an increase in computational power to parallel the number of transistors, and if the transistors go unused, they effectively do not exist.
By the same token, I'll believe that FP is the way of the future when I see it. Don't get me wrong - in a lot of ways I like functional programming. Recursion is so much more straightforward than iteration. However, tasks that can be completed with an iterative polynomial-time algorithm often end up exponential when recursive. Of course, a bit of tail recursion and you can deal with that, but some things aren't easily tail recursed (I believe that is what TFA means by all your algorithms break.) And really, when it comes right down to it, tail recursion is just contorting an iterative approach to look recursive.
Re:Broken Algorithm BS (Score:4, Insightful)
It isn't recursion vs. iteration but rather pure vs. environmental (i.e. mutable variables) that make parallelism safe.
Re:Broken Algorithm BS (Score:3, Informative)
However, tasks that can be completed with an iterative polynomial-time algorithm often end up exponential when recursive. Of course, a bit of tail recursion and you can deal with that, but some things aren't easily tail recursed.
That is not so. Recursion is more general than iteration, and if you make use of that extra expressive power, you can get higher-complexity algorithms. But then these algorithms would be just as complex if implemented with iteration and an explicit stack, or whatever data structure is needed.
The canonical example where iteration is linear and naive recursion is exponential is the computation of the Fibonacci series. But the simple iterative algorithm is by no means obvious - and, on the other hand, a linear-time recursive algorithm is not significantly harder to come up with.
Problems that are not easily tail-recursive are also not easily solved with simple iteration.
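The Fibonacci example from the comment above, side by side: naive recursion is exponential because it recomputes subproblems, while a recursive version carrying accumulators does the same linear amount of work as the iterative loop (note that CPython does not actually eliminate tail calls, so the tail-shaped version still consumes stack frames):

```python
def fib_naive(n):
    # Exponential: fib_naive(n-1) and fib_naive(n-2) overlap heavily.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_acc(n, a=0, b=1):
    # Tail-recursive shape: the state travels in the arguments,
    # so each value is computed exactly once (linear work).
    return a if n == 0 else fib_acc(n - 1, b, a + b)

def fib_iter(n):
    # The equivalent iterative loop.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_naive(k) for k in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib_acc(30) == fib_iter(30) == 832040)  # True
```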
Re:Broken Algorithm BS (Score:5, Informative)
It's true that Moore actually said the transistor count doubles every 18 months. However, for a long time an immediate corollary of Moore's Law was that software doubled in speed every 18 months, which is essentially why Moore's Law was important. I think what the author is trying to say is that in order for this corollary to remain true, people must learn to write parallel software. It is much easier for a compiler to get a functional program (FP) running in parallel than a sequential program (SP). Hence, those who can write in FP languages will be better suited to write the software of tomorrow than those who can only write in SP languages.
Re:Broken Algorithm BS (Score:4, Insightful)
>> How can Moore's Law ever be a software issue?
In a sense, it can be: if we start rewriting Java/C#/VB apps in assembler, I'm pretty sure the performance will at least double each year, and we can forget about those cores for good.
Thermodynamic computing (Score:5, Funny)
Pure functional programming removes all side effects. This makes memory optimization (critical to efficient multiprocessing) much easier. It also makes garbage collection easier - but that is pretty much canceled out by an increase in garbage.
But beyond functional programming is thermodynamic computing. This starts with functional, but requires all operations to be reversible. Ideally, the total electrons are conserved - you can never clear a bit - just exchange bits (and of course more complex operations like add, mul, etc - but all reversible and charge conserving). Of course real hardware will still need to make up for losses, but power consumption and heat go way down.
The fascinating thing is that thermodynamic programming requires a pool of known 0 bits and known 1 bits. As the algorithm progresses, you can't just throw away results you aren't interested in - you collect the unwanted results in an entropy pool. Eventually, you run out of known bits, and need to clear some entropy bits in order to continue. This takes lots more power (like erasing a flash block). The analogy to real world entropy is striking.
Re:Thermodynamic computing (Score:5, Funny)
It is sad this was moderated "funny" rather than "interesting"
Re:Thermodynamic computing (Score:3, Informative)
Cutting edge, yes. But not warp drive.
http://www.cise.ufl.edu/research/revcomp/ [ufl.edu]
Re:Functional Programming Is a Red Herring (Score:5, Informative)
You seem to have some serious misunderstandings here.
Uh, no. By removing side effects functional programming removes the need to copy anything. If I'm trying to evaluate f(X) + g(X) for some complicated X, f, and g by evaluating f(X) and g(X) in parallel and adding the results, I don't need two copies of X because I know that neither f nor g will modify it. That's the whole point.
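A sketch of that point: `f`, `g`, and `X` below are illustrative stand-ins. Because both functions only read the shared input and never mutate it, evaluating them concurrently needs no copies of `X` and gives the same answer as sequential evaluation:

```python
from concurrent.futures import ThreadPoolExecutor

X = tuple(range(1000))   # an immutable "complicated" input

def f(xs):
    # Pure: reads xs, mutates nothing.
    return sum(x * x for x in xs)

def g(xs):
    # Also pure, so it can safely share X with f.
    return sum(xs)

# Evaluate f(X) and g(X) concurrently on the single shared X.
with ThreadPoolExecutor(max_workers=2) as pool:
    fut_f = pool.submit(f, X)
    fut_g = pool.submit(g, X)
    result = fut_f.result() + fut_g.result()

print(result == f(X) + g(X))  # True
```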
It only seems counter intuitive if you've swallowed the procedural programming paradigm and adopted it as your own to the point where you've forgotten how counter intuitive "X = X + 1" seemed at first.
And saying it's non-deterministic is just nuts. Sure, you could add non-deterministic semantics to any language, but there's nothing inherently non-deterministic about functional programming. In fact, I think you'd typically have to work a lot harder to make a functional language non-deterministic.
FP has nothing to do with threads, apart from the fact that functional programs could be executed by a large number of threads in parallel (or independent cores, or...?) without changing the outcome. And what exactly is the mess we're in? I can't think of another industry that has succeeded so spectacularly in such a short time.
And so on...did I just feed a troll?
--MarkusQ
Why some languages don't catch on. (Score:3, Interesting)
Between picking a programming language that is powerful (expressive/concise) for all the code you have to write (insert random FP language here) and picking a programming language that is powerful for all the code you don't have to write (perl, python etc), I'd pick the latter in most cases.
Even if an FP language is 2x more concise than perl for a given task (AFAIK it isn't), without lots of decent libs, you usually have to write a lot more code.
The more code you have to write, the more code someone else later on has to read and understand. And worse the more code that has to be documented and maintained.
Standard libraries are important, even if they are only de facto standards. If you use a standard library, even a buggy one, an experienced programmer looking at your code already knows what the lib does (and hopefully what its bugs are), so they only need to read your code; they don't have to spend time studying the lib.
Whereas if you write your own custom libraries, or use one of 100 possible libraries (with no de facto standard) out on the internet, the person taking over or helping out will have to spend extra time trying to debug and understand it.
Some of the FP languages are catching up in terms of standard libraries, but for many you either have to write your own crap or have to waste lots of time finding and evaluating libraries to see if they are worth using.
Nobody in their right mind wants to deal with 10 different print commands (ok, maybe the php people are different).
I'd rather be able to get on with writing the more "interesting" bits ASAP.
Re:Broken Algorithm BS (Score:3, Insightful)
The idea is that it's still easy to add more transistors, but no longer easy to make them run faster. That is why they are moving to multiple cores - they can add three (or even 15) cores far more easily (and using much less energy and putting off much less heat) than they can make a single core run at twice the clock rate.
Moore's law still exists in hardware, but it's manifested in a different way than it had been until a few years ago, due to physical limitations that don't appear to be going away any time soon. But software needs to adjust for those changes to benefit us in the real world.
Of course not all individual tasks can be sped up with parallelization, but many tasks, and the overall computing experience, probably can be.
Re:Broken Algorithm BS (Score:3, Interesting)
Moore's law dealt specifically with hardware. To say that "Moore's law is now a software problem" shows as much of a misunderstanding as attributing "any mention of Nazis causing someone to immediately lose the argument" to Godwin's Law[1].
It's reasonable to suggest that increasing the speed at which software runs will start requiring learning multi-threaded programming techniques, but to say that software will allow Moore's Law to continue is incorrect.
The idea that performance will increase at the same rate as doubling of transistors is attributed to a colleague of Moore. I have not heard of a moniker for this "law."
[1] Godwin's law states that as a USENET thread increases in length, the probability of a comparison to Hitler or a Nazi goes to 1. Godwin's law states nothing about the thread ending or about a winner or loser.
Re:Broken Algorithm BS (Score:4, Interesting)
Think of it as being "Moore's law is now also a software problem".
In the past, Moore's law meant that you could buy new hardware and have your stuff go way faster without any further work.
Now, Moore's law mainly means that you get more parallelism. Without software work, this means that your stuff runs at the same speed it used to. Thus software also needs to change in order to obtain the benefit.
Re:Broken Algorithm BS (Score:5, Interesting)
Well we've been using the same basic single-core architecture for the last, what, 30 or 40 years? Now programmers have a much bigger challenge in front of them: taking a program and making it work in a new environment.
I don't honestly believe there's been much in the way of innovation in the programming world for a while. Sure, you might have new coding languages that can do some things better than others or process it a different way, but don't they all operate on the same basic principle? Now programmers are faced with a complete paradigm change - the old style of programming isn't going to cut it 10 years from now when everything from your computer to your coffeemaker has a multi-core processor.
Engineers deal with stuff like this all the time. More than a few programmers use the term "software engineer". It's finally time for them to prove they can live up to that name and innovate.
Re:Broken Algorithm BS (Score:3, Interesting)
It's rather funny that you say that the same hardware architecture has been used for the last 30 to 40 years and then observe that there hasn't been any innovation in the software world "for a while".
Multicore isn't an innovation, it's a product of a lack of innovation in the hardware world.
Re:Broken Algorithm BS (Score:5, Insightful)
While algorithms won't break, you'll certainly have to rewrite a lot of them to take advantage of multiple processors.
This problem is not new.
The solutions are out there, and are also not new.
Article is pure shit.
Re:Broken Algorithm BS (Score:3, Informative)
Easy. Pure functional programming doesn't permit side-effects. Algorithms that perform I/O at various points in the algorithm can't easily be expressed in languages like that.
Also, although some popular functional languages like ML and Erlang have hacks to get around this, purely functional programming doesn't let a function modify global state. Without those hacks in the language, algorithms that require in-place modification of arrays (such as some sorting algorithms) can't be expressed at all in those languages. (You can modify the algorithms to not do in-place modifications of arrays, but then that's not the original algorithm any more.)
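It's worth noting that Haskell offers a middle ground here: the ST monad permits genuinely in-place array mutation that is invisible from the outside, so the function as a whole stays pure. A rough sketch (sortInPlace is an illustrative name; the swap-based insertion sort is chosen for brevity, not speed):

```haskell
import Control.Monad (forM_, when)
import Data.Array.ST (newListArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, elems)

-- An in-place insertion sort by adjacent swaps. All mutation happens on a
-- local STUArray inside runSTUArray, so callers see an ordinary pure function.
sortInPlace :: [Int] -> [Int]
sortInPlace xs = elems sorted
  where
    n = length xs
    sorted :: UArray Int Int
    sorted = runSTUArray $ do
      arr <- newListArray (0, n - 1) xs
      forM_ [1 .. n - 1] $ \i ->
        forM_ [i, i - 1 .. 1] $ \j -> do   -- bubble element i down into place
          a <- readArray arr (j - 1)
          b <- readArray arr j
          when (b < a) $ do
            writeArray arr (j - 1) b
            writeArray arr j a
      return arr
```

So "can't be expressed at all" is too strong for Haskell specifically; the language just forces the mutation to be locally scoped.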
A brief introduction to functional programming (Score:5, Insightful)
>>When you move to FP, all your algorithms break
>If moving to a functional programming language
>breaks your algorithms, then you are somehow
>doing it wrong. That line doesn't even make sense
>to me. Algorithms are mathematical constructs
>that have nothing to do with programming
>paradigm. Assuming the language is Turing
>complete, how is that even possible?
You are confused about the definition of an algorithm, and the significance of Turing completeness.
First of all, an algorithm is a *way* of doing things, with an associated complexity: a mathematical description of how long it will take to run, often written in big-O notation such as O(n).
Two Turing-equivalent machines don't necessarily support the same algorithms, although they will always have *equivalent* algorithms that get the same job done. HOWEVER, those algorithms don't necessarily have the same complexity. For instance, on Turing machine A a sort might be done in O(n^2) while on Turing machine B a sort can only be done in O(n^3).
To be functional means to be stateless. If you don't have state, then all sorts of algorithms become much more expensive. Notably, it's impossible to do a quicksort in a functional language, although other less efficient sorts may be done. Some people respond to that by saying that you can just buy a faster computer if you want to run functional algorithms; however, anyone with a decent computer science education knows that this can't solve differences in asymptotic complexity.
NOTE: quicksort (which cannot be done functionally) does not have better worst-case (big-O) complexity than mergesort (which can be done functionally), but it does have better average-case behavior and takes advantage of the underlying machine implementation much better. In some ways it is a bad example, but most people are familiar with sorting, whereas few people are familiar with dynamic algorithms.
The reason that functional programming languages exists goes back to Church and Turing. Church invented lambda calculus, and Turing invented Turing machines. Both are computationally equivalent in their power.
Turing machines have state, and are essentially a description of a hypothetical machine. Lambda calculus is, well, a calculus. It is functional in nature and has no state.
Not surprisingly, real-world computers look more like Turing machines than they do lambda-calculus-evaluating machines. Also, virtually all programming languages are built around state manipulation, since that's what the underlying hardware has to do.
The idea of a functional programming language is to emulate the lambda calculus on a rough approximation of a Turing machine. Technically it's possible for any Turing-equivalent machine to emulate any other. However, since the two machines are so different, this makes things dog slow. Again, faster computers don't solve this problem because there is an asymptotic difference in complexity, not a constant factor difference.
Re:A brief introduction to functional programming (Score:4, Informative)
Notably, it's impossible to do a quicksort in a functional language
Some Haskell guy seems to disagree [haskell.org].
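The definition that link points at is presumably the oft-quoted list version, something like the sketch below. Note the standard caveat: this expresses quicksort's pivot-and-partition recursion over lists rather than partitioning an array in place, which is exactly the objection raised elsewhere in this thread.

```haskell
-- The classic Haskell "quicksort": partition around a pivot, recurse.
-- Not in-place, so its constant factors differ from the array version,
-- but the algorithm's recursion scheme and O(n log n) average case remain.
qsort :: Ord a => [a] -> [a]
qsort []       = []
qsort (p : xs) = qsort [x | x <- xs, x < p]
                 ++ [p]
                 ++ qsort [x | x <- xs, x >= p]
```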
Why would the Algorithm break? (Score:3, Interesting)
I don't see why an algorithm would break just because you're changing language type, the whole point of an algorithm is that it's programming language independent.
Re:Why would the Algorithm break? (Score:4, Informative)
This is why functional programs are more suited for concurrency, and this is why your sequential algorithms will fail to work.
Re:Why would the Algorithm break? (Score:3, Informative)
You mean to say that in a functional programming language, variables aren't? A paradox, a paradox, a most delightful paradox!
Re:Why would the Algorithm break? (Score:5, Informative)
Functional variables are like mathematical variables - they're variable in that you may not have discovered their value yet, but once you discover their value it stays the same for the current instance of the problem. For the next instance of the problem (i.e. the next call to the function), you're talking about a different set of variables that potentially have different values.
Re:Why would the Algorithm break? (Score:3, Insightful)
If you're writing an algorithm in a programming language you're doing it wrong.
Amdahl's Law (Score:5, Insightful)
Question is, how realistic is that?
Amdahl's Law also tells us that the amount that parallelization can speed something up is ultimately limited by the parts that can't be done in parallel.
Scheme (Score:5, Funny)
(have I (feeling ((become popular Scheme) again)))
Oh noes! (Score:4, Funny)
Lisp! NO!!!!!!!!!!!!!!!!
Rolls over and dies...
(added to make filter happy)
it's always a good time to try functional (Score:5, Interesting)
It's been said in the comments on slashdot many times: learning functional programming techniques will improve your programming skills. There are many good functional languages out there, and many have imperative features for ease of transition. No, functional programming will not solve all of your problems, but it will teach you that most valuable of all lessons: how to think about a problem _differently_.
You don't need an excuse, start today.
Re:it's always a good time to try functional (Score:3, Interesting)
What language would you recommend for an OO programmer to start with?
Re:it's always a good time to try functional (Score:5, Informative)
I would say Haskell, but I think that's the language everyone should learn, so I'm biased. The typeclass system provides for some of the functionality of object oriented programming.
If Haskell scares you, Ocaml is another good choice. It's a multi-paradigm language with an emphasis on functional programming, but it also allows you to use mutable state wherever you like (whether this is a good thing or not is a matter of some debate). It even has some traditional object-oriented programming features, but they tend not to get used much in practice.
If you care about performance, they both have decent native-code compilers. My impression is that Ocaml is a bit faster for single-core tasks, but Haskell's parallel programming features are much better.
Re:it's always a good time to try functional (Score:3, Informative)
Ruby is nowhere near a functional language. It's as imperative as anything else.
Re:it's always a good time to try functional (Score:5, Interesting)
You don't need an excuse, start today.
The excuse is: it's fun. But if you do start, choose the right language for the job. Python for example seems good for fp, but was not designed for the task. Don't choose a language that simply supports functional programming. Choose a language that was designed specifically for functional programming. You'll be happier in the long run when you don't run into limitations of the language you choose.
My 2c.
Re:it's always a good time to try functional (Score:5, Informative)
Python for example seems good for fp
Last time I heard this, I checked, and the python developers were refusing to commit tail-recursion optimisation patches because it 'made debugging too hard'. Since most functional algorithms are tail-recursive, you will blow your stack very quickly without this. It's even in GCC, meaning that C is better suited to functional programming than Python.
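For anyone unsure what "tail-recursive" means in practice, here is the shape in a language whose compilers do handle it (Haskell used for illustration, since the whole point is that Python doesn't): the recursive call is the *entire* result of the function, so the compiler can reuse the current stack frame instead of growing the stack.

```haskell
-- Tail-recursive sum of 1..n using an accumulator. The call to go is the
-- whole result of go, so GHC compiles it to a jump: constant stack space.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)  -- tail call: nothing left to do after it
```

The equivalent direct recursion, `sumTo n = n + sumTo (n - 1)`, is not a tail call (the `+` still has work to do afterwards), which is the kind of definition that blows the stack in languages without the optimization.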
Re:it's always a good time to try functional (Score:3, Informative)
s/stack/wad
dang newbies
Formal Methods Initiative (Score:5, Insightful)
This reminds me about the /. article "Twenty Years of Dijkstra's Cruelty" [slashdot.org], just a few days ago.
The problem boils down to the fact that programming is in fact a very advanced calculus, and writing a program is 'deriving' it: arriving at a correct formula together with a proof that it's correct. That's how software should be written anyway. And functional programming will only make it *simpler*, not harder.
Question (Score:3, Interesting)
So a quick question before I go and master FP...does the compiler automatically compile the code that can be done in parallel in the proper "way", or do I have to specify something?
Also, if I rewrote an app written in an imperative language in an FP one like Haskell, would I see that much of a difference on a multi-core processor?
Re:Question (Score:3, Interesting)
The way it should work is that if you're implementing a parallel algorithm like merge sort http://en.wikipedia.org/wiki/Merge_sort [wikipedia.org], it should just go and grab a new processor core each time you make a recursive call.
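No mainstream runtime literally hands out a fresh core per recursive call, but the shape is easy to sketch with explicit threads. A rough Haskell version (parMergeSort and the depth cutoff are inventions for illustration; real multi-core execution additionally needs GHC's -threaded runtime):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Standard merge of two sorted lists.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x : xs) (y : ys)
  | x <= y    = x : merge xs (y : ys)
  | otherwise = y : merge (x : xs) ys

-- Merge sort that forks a thread for one half at each level, down to a
-- given depth (roughly 2^depth threads), then sorts sequentially below that.
parMergeSort :: Ord a => Int -> [a] -> IO [a]
parMergeSort _ []  = return []
parMergeSort _ [x] = return [x]
parMergeSort depth xs
  | depth <= 0 = return (seqSort xs)
  | otherwise  = do
      let (left, right) = splitAt (length xs `div` 2) xs
      box <- newEmptyMVar
      _ <- forkIO $ do
             l <- parMergeSort (depth - 1) left
             putMVar box (length l `seq` l)  -- force the spine so the
                                             -- sorting work happens here
      r <- parMergeSort (depth - 1) right
      l <- takeMVar box
      return (merge l r)
  where
    seqSort []  = []
    seqSort [y] = [y]
    seqSort ys  = let (a, b) = splitAt (length ys `div` 2) ys
                  in merge (seqSort a) (seqSort b)
```

The depth cutoff matters: spawning a thread per recursive call all the way down would drown the speedup in scheduling overhead, which is the classic auto-parallelization failure mode discussed elsewhere in this thread.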
Moore's Law? (Score:3, Insightful)
How is maintaining the rate of increase in the number of transistors that can be economically placed on an integrated circuit a software problem?
Functional is easy (Score:4, Insightful)
The biggest problem with functional languages tends to be their type systems (I'm looking at you, Haskell). A functional language with a nice type system, like Erlang, is easy to pick up. And the example I picked totally at random, Erlang, also happens to have CSP primitives in the language, which makes parallel programming trivial. I've written code in it and then deployed it on a 64-processor machine and watched it nicely distribute my code over all 64 processors. If you program in a CSP style (which is easy) then your code will exhibit 1000-way parallelism or more and so will trivially take advantage of up to that many processors.
And, actually, object orientation is a good option too. Alan Kay, who coined the term, defined it as 'simple computers [objects] communicating via message passing' - sounds a lot like CSP, no? The main difference is that OO is usually implemented with synchronous message passing, but if you implement it with asynchronous messaging (the actor model) then you have something almost identical to CSP. You can also add this implicitly with futures. I've done this in Objective-C for Etoile. Just send an object an -inNewThread message and any subsequent message you send to it is passed via a lockless ring buffer to the other thread and executed. We use it in our music jukebox app, for example, to run the decode in a separate thread from the UI. Implementing it in the language, rather than the library, means you can do it more efficiently, so this by no means replaces Actalk or Erlang in the general case, but modern processors are fast serial processors so it makes sense to program much larger chunks of serial code on these systems than Erlang or Actalk encourage.
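The CSP shape being described - isolated processes that own no shared state and talk only over channels - can be sketched in a few lines of Haskell with base-library channels (squareService is a made-up example; Erlang or occam would express the same thing with their native send/receive primitives):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

-- A toy CSP-style "process": it owns no shared state, receives a request
-- on one channel, and sends the reply on another.
squareService :: Int -> IO Int
squareService n = do
  reqs  <- newChan
  resps <- newChan
  _ <- forkIO $ do            -- the server process
         x <- readChan reqs
         writeChan resps (x * x)
  writeChan reqs n            -- send a request ...
  readChan resps              -- ... and block on the reply
```

Because all communication goes through the channels, there is nothing for the two threads to race on - which is the property that makes the style scale to many cores.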
Suggested reading. (Score:5, Informative)
I've recently gotten into FP. I started with Erlang and then branched into ML and Haskell. In case you're interested, here are the best books I've encountered for each language:
Programming Erlang [amazon.com]
Programming Haskell [amazon.com]
ML for the Working Programmer [amazon.com]
Also, I'd definitely recommend starting with Erlang, because the Programming Erlang book made for a very easy introduction to functional programming.
Which is more likely? (Score:5, Insightful)
A. Many programmers start writing or re-writing their code in functional programming languages.
or
B. Programmers continue writing to their platform of choice, e.g. .NET, Java, etc., and the guys writing the virtual machines do the heavy-lifting, making the VM execute more efficiently with multi-cores?
I'll go with B.
Apple is already proving this. Mac OS X Snow Leopard will have a lot of this built-in. Read about "Grand Central."
Re:Which is more likely? (Score:3, Interesting)
Corollary to B:
Visual Studio 2010 will make parallel and multithreaded programming easier to accomplish. Essentially, instead of just
for (x = 0; x < 1000; x++) dosomething();
you'll have
parallel_for (x = 0; x < 1000; x++) dosomething();
and 2-1000 threads will kick off at once. I'm sure there can be thread pooling, locking, etc., but if the code is segmented well enough then parallel programming for 32 cores is probably not that traumatic for 90% of the world's code needs. Get into some high performance stuff or time-critical code, and you're probably already past the point where VS2010's new parallel code library is interesting.
Sounds like BYTE magazine in 1985 (Score:5, Interesting)
Look at the table of contents of this BYTE magazine from 1985 [devili.iki.fi]. In a nutshell it said the same thing as this article: Functional languages are the great hope for solving the parallel programming problem. Only then the languages were different: Hope, Linda, and Prolog were among them.
My response back then was to get excited about FP. My response now is: Where is the proof? Can anyone name a single instance where a functional paradigm has yielded the best measured performance on a parallel computing problem? In other words, take the best functional programmers in the world, and pair them up with the best tools in existence. Can they actually create something superior, on any problem running on any hardware? This is a very low bar, but until it's demonstrated FP will be confined mostly to the lab.
IMHO the path forward is to treat parallel programming like just another optimization. As we know, the vast majority of your code doesn't need to run fast, and you get most of the performance benefit by optimizing small bits of code that really matter. I suspect the same thing will happen with parallel programming: In a given application only a few areas will benefit much from parallelism, and these tasks will probably be very similar across applications. Graphics rendering, large matrix math, video encoding/decoding, and speech recognition would be examples. People will treat these as special cases, and either develop special-purpose hardware (e.g., GPUs), or libraries that encapsulate the nitty-gritty details. The interesting question to me is what is the best runtime model to support this.
Re:Sounds like BYTE magazine in 1985 (Score:3, Interesting)
Functional Programming is not a buzzword that is supposed to be better at parallelism. When coding in a stateless fashion (what FP is all about), function reduction can be split transparently across many computers. There are no locks, no funny semaphores and mutexes, no deadlocks, no nothing. It just works, because of its uncoupled nature of "no side effects, ever".
There is one kind of early optimization that is not premature: architectural optimization. If you design your whole system to be synchronous, you trade off scalability for simplicity. If you design your whole system around the imperative paradigm, there will be a significant amount of work involved in making things work in parallel environments. No amount of later optimization will fix these kinds of architectural issues.
Functional design is an architectural decision.
that hasn't materialized, though (Score:5, Interesting)
Auto-parallelization of functional programs has been proposed for decades now, and every attempt has fallen on its face as the overhead has killed any gains. Current parallel FP research isn't even putting that much effort into auto-parallelization, because most PL researchers consider it a dead end: taking a random FP and evaluating all its thunks in parallel as futures, or some similar mechanism, is not going to solve the problem.
Instead, most of the current research is in programmer-level primitives for designing and specifying inherently parallel algorithms. There is some of this in both the FP and non-FP communities.
Re:Sounds like BYTE magazine in 1985 (Score:5, Informative)
My response back then was to get excited about FP. My response now is: Where is the proof?
Whether functional programming is the best paradigm to use for parallel computing is undecided. But it does have a couple of advantages over imperative programming.
First, imperative programming specifies the order of evaluation, whilst functional programming does not. In Haskell, for instance, an expression can essentially be evaluated in any order. In Java, evaluation is strictly sequential; you have to evaluate line 1 before line 2.
Second, imperative languages like Java favour mutable data, whilst functional languages like Haskell favour immutable data structures. Mutability is the bane of parallel programming, because you have to have all sorts of locks and constraints to keep your data consistent between threads. Programming languages that do not allow mutable data don't have this problem.
1985 vs. 2008 - no more free speedup (Score:3, Interesting)
The difference between 1985 and today is the end of the free lunch in clock rates. 1985 was roughly the middle of the last great clock rate stall - during the shift from minicomputers to micros, Moore's Law expressed itself not as speedup, but as decrease in cost per bit/gate.
Then we entered the great die-shrink and clock rate increase era, where for about 20 years processors really did get twice as fast every 12-18 months. Why expend the effort to bury a problem in parallel hardware when you can see faster serial hardware coming over the hill?
Clock rates have stalled again, and we're reaching physical limits for our current fabrication methods and physical chip designs. We're seeing renewed interest in functional programming because it looks like a way to make use of parallel hardware safely compared to imperative coding. Traditional coding methods are easier to understand, and are probably more efficient when they work, but...
how fast do you want the wrong answer?
example (Score:5, Interesting)
As an example of the learning curve, I wanted to learn a little OCaml. I played around with this [inria.fr] insertion sort example. I used it to sort a list of 10,000 integers, and it took 10 seconds, versus <1 second in C with linked lists. Not too horrible. But changing it to 100,000 integers made it die with a stack overflow, so I'm guessing that its memory use goes like n^2. However, it's not at all obvious to me from looking at the code that this would be the case. I think if I wanted to do a lot of OCaml programming I'd have to develop "FP Eye for the Straight Guy." Probably if you wanted to make it perform better on big arrays you'd want to make it tail-recursive, but it's not totally obvious to me from the code that it's *not* tail-recursive; although the recursive call isn't the very last line of code in the function, it is the very last thing in its clause...?
I know of at least one well known OSS project in Haskell, written by a very smart guy, that is really struggling with performance issues. I wonder whether bad performance is to FP as null-pointer bugs are to C. Sure, a sufficiently skilled programmer should theoretically never write code that will dereference a null pointer, but nevertheless my Ubuntu system needs a dozen security patches every month, many of which are due to null pointers, buffer overflows, etc.
don't bother (Score:3, Interesting)
When people say that they want to learn "functional programming", they usually mean some functional programming language. Common functional programming languages (OCaml, SML, Haskell, Lisp, Scheme, etc.) are duds when it comes to performance: in principle, they encourage parallel-friendly programming; in practice, they get many basic language design and performance issues wrong.
It's also silly to rewrite large amounts of code when usually only a small percentage of the code is performance critical. If you need to speed up C code to run fast on a multicore machine, optimize the inner loops. All the "functional programming" you need for that is available right within C in the form of OpenMP. You'll understand OpenMP a little better if you have used a "real" functional programming language, but the concepts aren't rocket science and you can do just fine as a C programmer.
Re:don't bother (Score:3, Interesting)
i kind of agree, don't bother about fp.
the thing is, our applications are today running hundreds of processes in a box, even if you get 256 cores on a cpu, we'll still keep them all busy without a change.
whoever wrote the logic for the article was probably on coke or smth, seriously, no company with over 50 employees (and hence reasonable salaries for non-owners) will migrate to any functional language any time soon, they can't afford it, they don't have time for it and most of all, they don't want software that only works due to the knowledge of "those 2 guys" ...
people want software that a lot of people can understand, people want to hire people that can be replaced (in case they get hit by a train or go over to the competitor). nobody wants "this unique piece of s.it that is 5% faster". welcome to the "enterprise".
big players just don't care. they want simple solutions that can be extended and handled cheaply. buying an extra box with a network connection behind it is cheap. hiring niche people is expensive as hell.
enough complained, i think those who have ever worked more than a few years in big companies understood the point already while reading the title or article...
parallel algorithms, mutable data, and STM (Score:5, Insightful)
While pure functional code isn't allowed to manipulate mutable, shared state, functional languages often provide some mechanism to mix "pure" and "impure" (stateful, imperative code).
In the haskell world, there is the IO monad, which is sort of a sandbox where anything is allowed. Functions within the IO monad (often called "IO actions") are allowed to invoke other IO actions or call pure code, but the reverse is not true; pure code cannot invoke an IO action. Also, due to laziness, pure functions that were passed an unevaluated "thunk" as an argument might trigger deferred IO, but this is (usually) transparent to the programmer.
So far, this doesn't sound any better than a pure imperative language, but there is also an STM monad (for software transactional memory) which is like pure code except that you're allowed to access shared mutable data through a restricted API. STM is based on the idea that if two processes race and manipulate the same shared data structures at the same time, the conflict can be detected by the run time system, which can stop and replay the transaction one after the other.
The reason STM transactions can be safely replayed by the run-time system is that the language guarantees that the STM transaction doesn't have any hidden state somewhere, that might cause problems if the transaction were replayed. This is not a guarantee you can make in C, C++, Java, or any other popular language that I am aware of.
Note 1: It is possible for STM transactions to livelock if they continually conflict and are replayed, so you can still shoot yourself in the foot. However, it does make avoiding certain other problems much easier.
Note 2: I'm not really a haskell guru, so everything above should be taken with a grain of salt. Real World Haskell has a chapter [realworldhaskell.org] on STM, which is the basis of my current understanding (I haven't yet had cause to use STM in any program I've written.)
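About the smallest program that shows the guarantee in action is a shared counter hammered by several threads (a sketch; countWithSTM is an invented name, and the stm library it uses ships with GHC). Every read-modify-write runs as one atomic transaction, so no increment is ever lost, with no explicit locks anywhere:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.STM (atomically, newTVarIO, readTVar, writeTVar)
import Control.Monad (forM_, replicateM, replicateM_)

-- nThreads workers each bump a shared counter nBumps times. The runtime
-- detects conflicting transactions and replays them, so the final count
-- is always exactly nThreads * nBumps.
countWithSTM :: Int -> Int -> IO Int
countWithSTM nThreads nBumps = do
  counter <- newTVarIO 0
  dones <- replicateM nThreads newEmptyMVar
  forM_ dones $ \done -> forkIO $ do
    replicateM_ nBumps $ atomically $ do   -- one atomic read-modify-write
      n <- readTVar counter
      writeTVar counter (n + 1)
    putMVar done ()
  mapM_ takeMVar dones                     -- wait for all workers
  atomically (readTVar counter)
```

The same program written with a plain mutable reference and no locking could silently drop updates; here the type system won't even let the transaction touch non-STM state, which is what makes the replay safe.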
Re:parallel algorithms, mutable data, and STM (Score:4, Informative)
I agree. I don't understand them very well, but fortunately one can make use of them without a complete understanding. For many programs, it suffices to understand that if you want to do IO, you need to do it within the IO monad (often in the IO action "main" which is equivalent to "int main() {...}" in C).
This doesn't mean you have to write whole file parsers in monadic fashion, which is what I thought until I actually had occasion to write a file parser in Haskell, and found that I could write the parser as a pure function that accepts a string and returns the parsed data structure. Then, in main, I just read the file in with readFile (which takes a filename and returns its contents as a string) and then pass that string to my pure file parser.
This would seem horribly inefficient for large files; what if there isn't enough memory to store the whole file as a string? But here, laziness comes to the rescue. The actual file IO doesn't happen until the file parser starts reading in the string. My pure function churns along, oblivious to the fact that it's causing deferred IO in the background, simply from forcing the evaluation of the string's value.
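A stripped-down version of that pattern (countRecords and summarizeFile are invented stand-ins for the real parser):

```haskell
-- A pure "parser": here it just counts the non-empty lines of its input.
countRecords :: String -> Int
countRecords = length . filter (not . null) . lines

-- readFile is lazy: no file contents are read at this point. The actual IO
-- is driven later, by whatever demands the pure function's result.
summarizeFile :: FilePath -> IO Int
summarizeFile path = do
  contents <- readFile path
  return (countRecords contents)
```

(Lazy IO has its own well-known pitfalls around handle lifetimes and error handling, but for the simple read-parse-report case it keeps the parser entirely pure.)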
This is, perhaps, not a very great insight into the nature of monads, but I thought I would share my experience, and if not enlighten then at least show that one can write normal programs in a fairly straightforward manner without being fully enlightened.
Re:Multi Threaded programming (Score:5, Insightful)
Parallel algorithms are fundamentally different from sequential ones. Take sorting. No multi-threading is going to help you if you keep implementing quicksort. While many problems are inherently parallel and it is easy to undo their serialization, several others will turn into bottlenecks. I am almost done with my Ph.D. and still I haven't received a proper education in parallel algorithms. It'll take a whole new generation of CS teachers to make the grand paradigm shift.
Re:Multi Threaded programming (Score:5, Informative)
Huh? Quicksort is pretty easy to parallelize [wikipedia.org], due to its divide and conquer nature: it splits its list into sublists and recurses on those sublists.
why FP is helpful for parallel tasks (Score:3, Informative)
Many functional languages do not allow the program to manipulate shared state, which removes that whole class of problems; information sharing is limited to letting the run time system fork threads and copy function arguments and return values back and forth.
However, if you really do need shared mutable state (and there are plenty of algorithms that do), in Haskell there's software transactional memory [realworldhaskell.org], which provides a restricted API for manipulating shared mutable state with the STM monad. Since the code within an STM transaction is not allowed to cause side effects or access any shared mutable state outside of STM, the runtime system is able to stop and replay transactions whenever it detects conflicts. This is one area where FP actually can make things easier, by enabling the same sorts of operations you might do in an imperative multithreaded program, but doing it in a much safer way.
And, if you don't need shared mutable state, sometimes parallelism can be achieved just by replacing "map" with "parMap" in the right places and recompiling with the appropriate options. It doesn't get much easier than that.
Re:Python FP -- only slightly like real fp (Score:3, Informative)
Python has some features inspired by FP languages including Haskell, but is not anything like real functional programming. Haskell is far more powerful and serious, but also a heck of a lot more difficult to use. Python has a "practicality beats purity" mantra; you could think of Haskell as "purity strikes back".
Stuff Haskell gets you:
Serious native-code compiler whose output can beat compiled C code for some programs (and does darn well on average, see the Alioth shootout)
Ability to use multi-core parallelism, with a library module that treats shared memory as a transactional database, allowing use of shared data between threads while getting rid of all the lock management headaches of languages like Java. This can work because Haskell's functional purity guarantees that the threads won't clobber each other except under circumstances precisely controlled by the library.
Data parallelism allowing computations of list comprehensions to automatically be done in parallel on multiple CPU's
Rigorous type system (nothing like the broken ones in C or Java that you might be used to) that lets you express complex invariants in your datatypes so that errors are caught by the compiler. This greatly decreases the amount of runtime debugging required.
I could go on, but you get the idea.
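To make that type-system point concrete, here's a toy example (my own illustration, not the parent's): encode "this list is never empty" in a type, and "head of empty list" crashes become compile errors instead of runtime bugs.

```haskell
-- A list that is non-empty by construction: one guaranteed element
-- plus a possibly-empty tail. (GHC ships Data.List.NonEmpty; this
-- toy version just shows the idea.)
data NonEmpty a = a :| [a]

-- Total function: there is no empty-list case, so none can fail
safeHead :: NonEmpty a -> a
safeHead (x :| _) = x
```

Code that only has a plain `[a]` is forced by the compiler to handle the empty case before it can construct a `NonEmpty a` and call `safeHead`.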
Good tutorial: http://learnyouahaskell.com/ [learnyouahaskell.com]
More detailed book (full text online): http://book.realworldhaskell.org/ [realworldhaskell.org]
Haskell has a very steep learning curve compared with other languages (or "unlearning curve", as some put it, since you have to forget everything you knew), but learning it (a still ongoing process for me) is one of the most interesting and mind-expanding things I've ever done as a programmer.
Re:take it from a computational physicist (Score:3, Insightful)
He's probably talking about Markov chain Monte Carlo, not the simple Monte Carlo where you just draw random numbers. You can run a bunch of Markov chains in a parallel manner with no communication, so MCMC is perfectly parallelizable in that sense. However, that often doesn't help you, since the main problem is usually a slowly mixing chain. Running N poorly-mixed chains isn't much better than running one. What you'd really like is to speed up a single chain. There are parallel MCMC algorithms out there that try to swap members from parallel chains to speed up each individual chain, but they're not always worth the effort.
The algorithms really do break (Score:5, Informative)
Let's say you have a few thousand (name, address) pairs and you want to be able to quickly look up a name to get the corresponding address, to add new names, etc. In imperative programming you'd probably use one of the mainstay data structures of CS 101, the good old hash table. To add a new name, you hash it and go and poke that address in the table to record the entry.
Well, remember that stuff about values in functional programming being immutable? Right: no hash tables in functional programming. You'd instead use something like an AVL tree or red-black tree, which lets you create a completely new structure that shares most of its content with the old one, except that the new one has this extra node. Of course, FP language libraries come with modules for making those structures, and in practice you can use them at the API level sort of like how you used to use hash tables, but they are completely different underneath, and if you want to program them yourself you are going to have to learn a lot of very basic techniques from scratch all over again. Chris Okasaki's book "Purely Functional Data Structures" is a good place to learn about this stuff in detail.
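With GHC's bundled containers library, the (name, address) example looks like this; Data.Map is a balanced binary tree underneath, persistent in exactly the sense described above:

```haskell
import qualified Data.Map as Map

-- The "old" address book
addressBook :: Map.Map String String
addressBook = Map.fromList [("Alice", "12 Oak St"), ("Bob", "34 Elm St")]

-- insert builds a *new* tree sharing most nodes with the old one;
-- addressBook itself is untouched, and both versions remain usable.
updated :: Map.Map String String
updated = Map.insert "Carol" "56 Pine Ave" addressBook
```

At the call site this feels almost like a hash table (`Map.lookup`, `Map.insert`, `Map.delete`), which is the "API level" resemblance mentioned above.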
Even more basic: the good old "for" loop, which updates an index variable each time through. Whoops! You can't update the index in a functional language, so there's no "for" loop. You instead use recursion, or a "higher order function" (function that operates on other functions). So instead of
for (i = 0; i < n; i++) ys[i] = f(xs[i])
You'd write something like
ys = map f xs
("map" takes a function f and a list of values xs, applies the function to each item in the list, and gives you back a new list). There is also a "list comprehension" syntax that you might know from Python:
ys = [f(x) | x <- xs]
but for complicated functions you end up having to use higher order functions and recursion explicitly. You really have to think a lot harder to program 20 lines of Haskell than 20 lines of C. But those 20 lines can do an order of magnitude more.
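And when "map" isn't enough, explicit recursion replaces the index variable; this is how you'd write map itself by hand (named `myMap` here to avoid clashing with the Prelude's `map`):

```haskell
-- Hand-rolled map: the recursive call on the tail plays the role
-- of i++ in the imperative loop, and pattern matching on [] plays
-- the role of the i < n termination test.
myMap :: (a -> b) -> [a] -> [b]
myMap _ []       = []
myMap f (x : xs) = f x : myMap f xs
```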
(Aside:) In case you were wondering, yes, you can implement traditional hash tables and other mutable structures in functional languages, and there are times when it's necessary, but it's comparatively a pain in the ass and you give up some of the advantages that had you programming functionally in the first place. Here is an article about someone's experiences switching from a mutable structure to a functional structure in a large program, and the headaches the functional structure solved:
http://www.cs.tufts.edu/~nr/pubs/zipcfg-abstract.html [tufts.edu]