
Time to Get Good At Functional Programming?

prone2tech writes "From an article at Dr. Dobb's: Chipmakers have essentially said that the job of enforcing Moore's Law is now a software problem. They will concentrate on putting more and more cores on a die, and it's up to the software industry to recraft software to take advantage of the parallel-processing capabilities of the new chips. As is argued in this article, this means becoming proficient in parallel functional programming. The bad news? Getting good at functional programming is hard, harder than moving from iterative Pascal or Basic or C coding to object-oriented development. It's an exaggeration, but a useful one: 'When you move to FP, all your algorithms break.'"
This discussion has been archived. No new comments can be posted.


  • by narcberry ( 1328009 ) on Friday December 05, 2008 @08:25PM (#26009343) Journal

    You mean OO isn't the only option?

    • by fyngyrz ( 762201 ) * on Friday December 05, 2008 @08:47PM (#26009541) Homepage Journal

      Another question you might ask yourself is: are you going to let the CPU designers push you into a programming paradigm you are not effective in? Personally, I can see a machine being quite useful with, say, 16 or 32 cores, just because these machines do more than one thing at a time. But I'd much rather see them speed the cores up than endlessly multiply the number of them. There is a *lot* of room left to do this. 3D architectures offer more connectivity than is currently being used. The number and type of one-cycle instructions within a CPU can be increased until the answer is "all of 'em" (which I doubt they'll ever get to). Orthogonality can be increased until, again, all instructions are available to the same degree for all registers and addressing modes, no matter what. Compilers like broad orthogonality (and so do assembly programmers, not that there are a lot of us left).

      If CPU designers run off towards the horizon making many-core designs, what if no significant number of people follow them there? Which... frankly... pretty much seems to be the case. I've an 8-core machine, and just about the only things that actually use those cores together are the easy ones: graphics and waveform encoding/decoding. Aperture sure enough uses all my cores in a well-balanced fashion and can build a JPEG in a snap; but that's a far cry from my web browser doing the same thing while trying to render a page.

      I'm just saying that the direction the CPU folks have chosen lately doesn't have to be the direction we actually end up going, and there are points in favor of this as the best choice.

      Just some thoughts.

      • by Zironic ( 1112127 ) on Friday December 05, 2008 @08:58PM (#26009645)

        Well, the problem is that no matter how much you bash an algorithm with a functional language you can't magically make a sequential task into a parallel one.

        • by Chandon Seldon ( 43083 ) on Friday December 05, 2008 @09:11PM (#26009753) Homepage

          Well, the problem is that no matter how much you bash an algorithm with a functional language you can't magically make a sequential task into a parallel one.

          Thing is, you probably have a parallel task that was already bashed into a sequential one.

          Most real-world problems are parallel problems. Even for the ones that aren't (say, compiling a file in C), you can usually run a lot of instances in parallel.

          • by triffid_98 ( 899609 ) * on Friday December 05, 2008 @10:35PM (#26010313)
            The reason this happens in the real world is that you generally need lots of X, be it roads, or babies, or whatever. If you just need one (unique inputs creating a single unique output), often you can't optimize it using multiple cores, though I suppose you could still do the equivalent of superscalar branch prediction, result tables (for fixed input sets), and the like.

            More CPU horsepower is the obvious choice for non-bus-limited operations, but things are starting to get expensive there, so I welcome a few extra cores. The real question is whether Intel and AMD will save some cash from their MMC (massive multi-core) projects and deliver a more sensible number of faster cores. You just can't depend on a given user's programs being set up to run efficiently in parallel.


            • by Chandon Seldon ( 43083 ) on Friday December 05, 2008 @10:54PM (#26010421) Homepage

              The real question is whether Intel and AMD will save some cash from their MMC (massive multi-core) projects and deliver a more sensible number of faster cores. You just can't depend on a given user's programs being set up to run efficiently in parallel.

              I'd much rather have 64 fast cores than 16 slightly faster (but horribly power-inefficient) cores, and that's really the tradeoff that you're talking about. All of the reasonable ways of throwing transistors at getting faster straight-line code execution have already happened. Hell, even the unreasonable ones have been implemented, like fast 64-bit division units.

              Intel and AMD have the choice of releasing dual-core processors that are 5-10% faster than last year's, or they can release 4- or 6-core processors for about the same transistor budget. The multi-core processors are better for almost everyone - there's no way to get a 5x speedup out of a 10% faster processor.

              • by plasmacutter ( 901737 ) on Saturday December 06, 2008 @03:29AM (#26011529)

                All of the reasonable ways of throwing transistors at getting faster straight-line code execution have already happened. Hell, even the unreasonable ones have been implemented, like fast 64-bit division units.

                You, and the chipmakers, have apparently become stale.

                There have been claims like this made throughout history. The patent office was closed because of this, Bill Gates once declared a maximum file size we'd ever need, and the list goes on and on.

                If today's major chipmakers are too lazy and uncreative to come up with new ideas, then academics and entrepreneurs will come and eat their lunch.

                • by Chandon Seldon ( 43083 ) on Saturday December 06, 2008 @01:29PM (#26013889) Homepage

                  There have been claims like this made throughout history. The patent office was closed because of this, Bill Gates once declared a maximum file size we'd ever need, and the list goes on and on.

                  Sorry, I didn't mean to imply that we had come to the end of invention, even in a small area.

                  But all of the effective techniques *that we know about* have been implemented, and the chip makers have been banging their heads on the wall for years trying to come up with new ones. They went to multi-core processors as an absolute last resort, and 5 years later it's probably time for most programmers to accept that and learn to target multi-core platforms.

            • by Hooya ( 518216 ) on Friday December 05, 2008 @10:54PM (#26010425) Homepage

              The way I look at it is that we are resigned to doing only certain things with a computer since, up until now, the computers we have created are only good at a certain class of problems. They suck donkey balls on most of the other interesting things that are immensely useful. Take optimization problems - there is an insane number of applications that we currently don't think of since, like I said before, we've given up hope of being able to tackle those.

              For example, I would love to have parallel computations figure out my 'optimal' tax returns, or have my GPS calculate optimal routes - the routes I get now are pretty crappy.

              My point to all this is that most of the problems that look like they are one-input-one-output aren't really that. It's just that over the last 50 or so years, we've learned to model them as such out of sheer necessity.

              • Re: (Score:3, Insightful)

                by ZosX ( 517789 )

                My thoughts exactly. I can think of a multitude of possible applications that we have yet to tackle simply because single-core processors were not up to the task of computing the large data sets efficiently.
                So many applications have been relegated to high-performance computing: weather prediction, 3D rendering, chaos math, simulation, etc. Software has been outpaced by what hardware is capable of (games notwithstanding) for some time now. Even this single-core Athlon64 3000 I'm using is about 100x fa

      • by danwesnor ( 896499 ) on Friday December 05, 2008 @08:58PM (#26009651)

        But I'd much rather see them speed the cores up than endlessly multiply the number of them.

        Your idea is not feasible because it screws up too many marketing campaigns. Please revise your idea and run it through sales before submission to management.

        • by cleatsupkeep ( 1132585 ) on Friday December 05, 2008 @09:47PM (#26010049) Homepage

          Actually, the reason for this is because of the heat consumption. As the power of a chip grows, the heat consumption grows much faster, and more cores are a much better way to get more speed with less power consumption and heat.

          • Re: (Score:3, Insightful)

            by mechsoph ( 716782 )

            As the power of a chip grows, the heat consumption grows much faster

            That doesn't make sense. The power used by the chip will exactly equal the heat generated by the chip due to the law of conservation of energy.

            more cores are a much better way to get more speed with less power consumption and heat.

            Ok, that sounds reasonable.

            • Re: (Score:3, Informative)

              by TheKidWho ( 705796 )

              It makes perfect sense if you consider that when he says "power" he wasn't referring to power consumption, but processing speed.

              I.e. heat output increases faster than processing power does.

      • by exley ( 221867 ) on Friday December 05, 2008 @10:20PM (#26010229) Homepage

        Another question you might ask yourself is, are you going to let the CPU designers push you into a programming paradigm you are not effective in?

        This to me sounds like laziness. "But parallel programming is HARD!"

        But I'd much rather see them speed the cores up than endlessly multiply the number of them. There is a *lot* of room left to do this.

        Please elaborate on this, because what you wrote after it is all rather vague to me. Making cores faster means higher clock speeds and/or improved architectures. As far as clock speeds go, chip designers apparently feel they are running up against physical limitations that make this difficult. And when it comes to improved architectures, what is quite possibly the #1 thing that has been done over the years to improve performance? Increasing parallelism: pipelining, superscalar architectures, multi-threaded single cores, VLIW, etc.

        • by lysergic.acid ( 845423 ) on Friday December 05, 2008 @11:30PM (#26010647) Homepage

          Personally, I think that in terms of commodity computing, we don't really need to squeeze any more power out of the CPU than we've already got: use fully pipelined superscalar architectures, perhaps multithreaded dual or quad cores (for high-end workstations), and VLIW to maximize ILP efficiency. Even at current processor speeds, 99% of the applications people (especially casual computer users) run have bottlenecks elsewhere (memory and disk I/O speeds, internet bandwidth, user response time, etc.).

          For the really resource-intensive stuff - image/video/audio processing, cryptography, 3D graphics, CAD/engineering applications, scientific modeling, processing financial data, etc. - you would be much better off using a specialized, dedicated vector coprocessor (as opposed to the general-purpose scalar processor that commodity CPUs tend to be). This way you can have a relatively low-power (and low-clock-rate) CPU for the common SISD instructions that constitute 99% of all computing tasks, greatly cutting the cost of consumer and prosumer systems. And by using highly specialized coprocessors for the heavy lifting, those applications can be processed more efficiently, using less power (and lower clock speeds) than a general-purpose scalar CPU doing the same work.

          That is why GPGPUs are generating so much interest these days. It just so happens that most of the really processor-intensive applications consumers run benefit greatly from stream processing. Game developers have long taken advantage of dedicated vector coprocessors with highly specialized instruction sets and architectures made specifically for 3D gaming. DSPs with specialized architectures are also commonly used for hardware-accelerated video encoding, audio processing, etc. And now companies like Adobe are seeing the advantages of using specialized vector coprocessors for their resource-intensive applications rather than having the CPU handle them.

          And, honestly, how many different kinds of processor-intensive applications do most users run on a regular basis? If you're a graphic designer, your main processing-power concern is 2D/3D graphics. If you're an audio engineer or musician, you're only going to use audio-related resource-intensive software. Likewise, if you're a cryptographer or cryptanalyst, you probably won't ever run audio editing or 2D graphics software. It therefore makes sense to pair a moderately powered general-purpose scalar CPU with a powerful, highly specialized vector coprocessor like a GPU/DSP/stream processor/etc.

        • Re: (Score:3, Insightful)

          by dhasenan ( 758719 )

          This to me sounds like laziness. "But parallel programming is HARD!"

          That's probably a better argument than fighting the CPU designers. If parallel programming is hard, it's more expensive to create parallel programs. They'll take longer to write. It'll take longer to implement new features for them. It'll take more money to maintain them. All that seems like a good reason to avoid parallel programming.

          On the other hand, if someone comes up with a new parallel programming paradigm that's slightly more difficult than procedural/object-oriented programming, but offers these be

      • by omb ( 759389 ) on Friday December 05, 2008 @10:53PM (#26010407)
        First, the scope for mono-processors is now strictly limited and we _will_not_ see x2/18 months again; there may be x10 to x100 possible within this basic technology, but that's just a few Moore's Law cycles. Second, the (commercial) problems, as described elsewhere, involve the solution of partial differential equations (heat, Navier-Stokes, elasticity) or stochastic simulations --- all of which are inherently clusterable, but not gridable.

        The real problems with existing architectures are cache coherency and memory bandwidth. We need better low-latency data transfer and significant improvement in auto-parallelization technology in compilers.

        It should be clear that there has been very little serious investment in basic compiler technology, and that is now needed. Academics have realised this, but it takes time. The bandwidth issues are solvable later on with more transistors.

        Finally, we have a variety of programming paradigms - OO, functional, procedural and more - each of which has a problem niche.

        One thing we will certainly have to get away from is the idea that 'legacy' code can carelessly be re-written in the flavor-of-the-month interpreted language, e.g. Java, C#, Perl, Python or Ruby. You can write 95% of your code in a programmer-friendly language, but the critical sections need to be in C, FORTRAN or assembler and need to be very carefully optimized. That can give you x100 on the same architecture.
        • Re: (Score:3, Insightful)

          by mechsoph ( 716782 )

          You can write 95% of your code in a programmer-friendly language. But the critical sections need to be in C, FORTRAN or assembler and need to be very carefully optimized. That can give you x100 on the same architecture.

          You're completely right that Perl, Python, and Ruby are pigs. Java looks a little like it was dropped on its head as a child. I don't know much about C#; it tastes too much like MSWindows for me to touch it.

          While different programmers have different tastes in friends, modern functional lang

        • Re: (Score:3, Interesting)

          Finally, we have a variety of programming paradigms - OO, functional, procedural and more - each of which has a problem niche.

          We have also seen the merging of those paradigms over the years. Every mainstream language today, with the exception of C, has some form of OOP. Every mainstream language either has or is getting (e.g. C++0x lambdas) first-class function values - with the unfortunate (for Java) exception of Java, now that Neal Gafter has moved from Google to Microsoft. Many languages are introducing

      • Re: (Score:3, Interesting)

        You're cherry-picking your data there (compilers, etc.). To see what's out there, we must look at what is commonly done and whether those things can benefit from parallel processing (and I see lots and lots of places, including the browser, where things could go parallel).

        When we do that, we notice that what goes on in the gaming industry soon becomes standard everywhere else. And both modern platforms, the PS3 and Xbox 360 (I'm not including the Wii, as Nintendo has different goals than having bleeding-edge te

      • Apologies (honest) for the sarcasm, but do you really think that if the CPU vendors had any useful room left to increase sequential processing performance, they wouldn't use it? Are 3D layouts out of the research labs yet? Are production fabs for 3D chips feasible?

        I.e. what basis is there to think CPU designers have any choice (for the mid-term) but to spend the transistor income from Moore's Law on parallel processing?

  • by marnues ( 906739 ) on Friday December 05, 2008 @08:28PM (#26009361)

    When you move to FP, all your algorithms break

    If moving to a functional programming language breaks your algorithms, then you are somehow doing it wrong. That line doesn't even make sense to me. Algorithms are mathematical constructs that have nothing to do with programming paradigm. Assuming the language is Turing complete, how is that even possible?

    • by marnues ( 906739 ) on Friday December 05, 2008 @08:33PM (#26009395)
      Christ on a crutch...I didn't even pick up on it the first time around. How can Moore's Law ever be a software issue? I can accept that most people don't care about transistor count, but saying it can somehow become a software issue is just too many steps removed from the original meaning. I love functional programming languages, but this article is hurting my brain.
      • by reginaldo ( 1412879 ) on Friday December 05, 2008 @08:55PM (#26009611)
        Moore's Law becomes a software issue when we need to change our coding paradigm to use all of the cores on the chip. The hardware holds up its end of the deal, but we need to develop software that utilizes the hardware correctly.

        That's where FP comes into play, as it allows developers to write heavily parallelized code that is also safe and fault-tolerant.
        • by jlarocco ( 851450 ) on Friday December 05, 2008 @10:22PM (#26010251) Homepage

          Moore's Law becomes a software issue when we need to change our coding paradigm to use all of the cores on the chip.

          Moore's law states that the number of transistors on a chip will double every two years. By definition it's a hardware problem.

          Obviously, utilizing those transistors is a software problem, but Moore's law doesn't say anything about that.

          The article sucks. The author seems to know FP about as well as he knows Moore's law.

          • Re: (Score:3, Insightful)

            I think Moore's law somewhat implies that those transistors will be used. It suggests an increase in computational power to parallel the number of transistors, and if the transistors go unused, they effectively do not exist.

            By the same token, I'll believe that FP is the way of the future when I see it. Don't get me wrong - in a lot of ways I like functional programming. Recursion is so much more straightforward than iteration. However, tasks that can be completed with an iterative polynomial-time algorithm

            • by jbolden ( 176878 ) on Saturday December 06, 2008 @01:13AM (#26011035) Homepage

              It isn't recursion vs. iteration but rather pure vs. environmental (i.e. mutable variables) that makes parallelism safe.

            • Re: (Score:3, Informative)

              by Stephan Schulz ( 948 )

              However, tasks that can be completed with an iterative polynomial-time algorithm often end up exponential when recursive. Of course, a bit of tail recursion and you can deal with that, but some things aren't easily tail recursed.

              That is not so. Recursion is more general than iteration, and if you make use of that extra expressive power, you can get higher-complexity algorithms. But these algorithms would be just as complex if implemented with iteration and an explicit stack, or whatever data structure is needed.

              The canonical example where iteration is linear and naive recursion is exponential is the computation of the Fibonacci series. But the simple iterative algorithm is by no means obvious - or, on the other hand, a linea

      • by Anonymous Coward on Friday December 05, 2008 @09:05PM (#26009715)

        It's true that Moore actually said the transistor count doubles every 18 months. However, for a long time, an immediate corollary of Moore's Law was that software doubled in speed every 18 months, which is essentially why Moore's Law was important. I think what the author is trying to say is that in order for this corollary to remain true, people must learn to write parallel software. It is much easier for a compiler to get a functional program (FP) running in parallel than a sequential program (SP). Hence, those who can write in FP languages will be better suited to write the software of tomorrow than those who can only write in SP languages.

      • by DiegoBravo ( 324012 ) on Friday December 05, 2008 @09:25PM (#26009855) Journal

        >> How can Moore's Law ever be a software issue?

        In a sense, it can be: if we start rewriting Java/C#/VB apps in assembler, I'm pretty sure the performance will at least double each year, and we can forget about those cores for good.

      • by CustomDesigned ( 250089 ) <stuart@gathman.org> on Friday December 05, 2008 @09:31PM (#26009909) Homepage Journal

        Pure functional programming removes all side effects. This makes memory optimization (critical to efficient multiprocessing) much easier. It also makes garbage collection easier - but that is pretty much canceled out by an increase in garbage.

        But beyond functional programming is thermodynamic computing. This starts with functional, but requires all operations to be reversible. Ideally, the total electrons are conserved - you can never clear a bit, just exchange bits (and of course perform more complex operations like add, mul, etc. - but all reversible and charge-conserving). Real hardware will still need to make up for losses, but power consumption and heat go way down.

        The fascinating thing is that thermodynamic computing requires a pool of known 0 bits and known 1 bits. As the algorithm progresses, you can't just throw away results you aren't interested in - you collect the unwanted results in an entropy pool. Eventually, you run out of known bits and need to clear some entropy bits in order to continue. This takes lots more power (like erasing a flash block). The analogy to real-world entropy is striking.
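
        For a concrete taste of the first point above, here is a minimal Haskell sketch (assuming GHC's parallel package; collatzLength is just an illustrative pure function) of how side-effect-free code can be spread across cores without locks:

          import Control.Parallel.Strategies (parMap, rdeepseq)

          -- A pure function: no side effects, so evaluating many
          -- calls in parallel cannot change the result.
          collatzLength :: Int -> Int
          collatzLength 1 = 0
          collatzLength n
            | even n    = 1 + collatzLength (n `div` 2)
            | otherwise = 1 + collatzLength (3 * n + 1)

          -- parMap evaluates the list elements in parallel.
          -- Build with ghc -threaded, run with +RTS -N.
          main :: IO ()
          main = print (maximum (parMap rdeepseq collatzLength [1 .. 100000]))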

        • by Gorobei ( 127755 ) on Friday December 05, 2008 @09:59PM (#26010111)

          It is sad this was moderated 'funny' rather than 'interesting'.

      • Re: (Score:3, Insightful)

        by FredMenace ( 835698 )

        The idea is that it's still easy to add more transistors, but no longer easy to make them run faster. That is why they are moving to multiple cores - they can add three (or even 15) cores far more easily (using much less energy and putting out much less heat) than they can make a single core run at twice the clock rate.

        Moore's law still exists in hardware, but it's manifested in a different way than it had been until a few years ago, due to physical limitations that don't appear to be going away any tim

    • by sexconker ( 1179573 ) on Friday December 05, 2008 @08:47PM (#26009539)

      While algorithms won't break, you'll certainly have to rewrite a lot of them to take advantage of multiple processors.

      This problem is not new.
      The solutions are out there, and are also not new.

      Article is pure shit.

    • Re: (Score:3, Informative)

      If moving to a functional programming language breaks your algorithms, then you are somehow doing it wrong.

      Easy. Pure functional programming doesn't permit side-effects. Algorithms that perform I/O at various points in the algorithm can't easily be expressed in languages like that.

      Also, although some popular functional languages like ML and Erlang have hacks to get around this, purely functional programming doesn't let a function modify global state. Without those hacks in the language, algorithms that
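
      To illustrate in Haskell terms (a small, hypothetical sketch): pure functions and side-effecting actions are separated by their types, so effects can't hide inside "pure" code:

        -- A pure function: the type promises no side effects.
        double :: Int -> Int
        double x = 2 * x

        -- An I/O action: the IO in the type makes the effect explicit.
        readAndDouble :: IO Int
        readAndDouble = fmap (double . read) getLine

        main :: IO ()
        main = readAndDouble >>= print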

    • by sentientbrendan ( 316150 ) on Friday December 05, 2008 @11:10PM (#26010529)

      >>When you move to FP, all your algorithms break

      >If moving to a functional programming language
      >breaks your algorithms, then you are somehow
      >doing it wrong. That line doesn't even make sense
      >to me. Algorithms are mathematical constructs
      >that have nothing to do with programming
      >paradigm. Assuming the language is Turing
      >complete, how is that even possible?

      You are confused about the definition of an algorithm, and the significance of Turing completeness.

      First of all, an algorithm is a *way* of doing things, with an associated complexity specification (a mathematical description of how long it will take to run, often denoted like O(n)).

      Two Turing-equivalent machines don't necessarily support the same algorithms, although they will always have *equivalent* algorithms that get the same job done. HOWEVER, those algorithms don't necessarily have the same complexity. For instance, on Turing machine A a sort might be done in O(n^2), while on Turing machine B a sort can only be done in O(n^3).

      To be functional means to be stateless. If you don't have state, then all sorts of algorithms become much more expensive. Notably, it's impossible to do an in-place quicksort in a functional language, although other, less efficient sorts may be done. Some people respond to that by saying you can just buy a faster computer if you want to run functional algorithms; however, anyone with a decent computer science education knows that this can't make up for differences in asymptotic complexity.

      NOTE: quicksort (which cannot be done functionally) does not have better worst-case (big-O) complexity than mergesort (which can be done functionally), but it does have a better average case and takes advantage of the underlying machine much better. In some ways it is a bad example, but most people are familiar with sorting, whereas few people are familiar with dynamic programming algorithms.

      The reason that functional programming languages exist goes back to Church and Turing. Church invented the lambda calculus, and Turing invented Turing machines. Both are computationally equivalent in their power.

      Turing machines have state and are essentially a description of a hypothetical machine. Lambda calculus is, well, a calculus. It is functional in nature and has no state.

      Not surprisingly, real-world computers look more like Turing machines than they do lambda-calculus-evaluating machines. Also, virtually all programming languages are built around state manipulation, since that's what the underlying hardware has to do.

      The idea of a functional programming language is to emulate the lambda calculus on a rough approximation of a Turing machine. Technically it's possible for any Turing-equivalent machine to emulate any other. However, since the two machines are so different, this makes things dog slow. Again, faster computers don't solve this problem, because there is an asymptotic difference in complexity, not a constant-factor difference.
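
      For reference, the mergesort mentioned above really is naturally functional - a minimal Haskell version (illustrative, not from the original post):

        -- A purely functional merge sort: O(n log n), no mutable state.
        msort :: Ord a => [a] -> [a]
        msort []  = []
        msort [x] = [x]
        msort xs  = merge (msort front) (msort back)
          where
            (front, back) = splitAt (length xs `div` 2) xs
            merge as [] = as
            merge [] bs = bs
            merge (a:as) (b:bs)
              | a <= b    = a : merge as (b:bs)
              | otherwise = b : merge (a:as) bs

        main :: IO ()
        main = print (msort [5, 3, 8, 1, 9, 2 :: Int])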

  • by Zironic ( 1112127 ) on Friday December 05, 2008 @08:29PM (#26009363)

    I don't see why an algorithm would break just because you're changing language type, the whole point of an algorithm is that it's programming language independent.

    • by koutbo6 ( 1134545 ) on Friday December 05, 2008 @08:36PM (#26009425)
      Functional programming languages make a rather restrictive assumption, and that is that all variables are immutable.
      This is why functional programs are better suited to concurrency, and this is why your sequential algorithms will fail to work.
      • Re: (Score:3, Informative)

        ..all variables are immutable.

        You mean to say that in a functional programming language, variables aren't? A paradox, a paradox, a most delightful paradox!

        • by Chandon Seldon ( 43083 ) on Friday December 05, 2008 @09:03PM (#26009699) Homepage

          You mean to say that in a functional programming language, variables aren't? A paradox, a paradox, a most delightful paradox!

          Functional variables are like mathematical variables - they're 'variable' in that you may not have discovered their value yet, but once you discover their value, it stays the same for the current instance of the problem. For the next instance of the problem (i.e. the next call to the function), you're talking about a different set of variables that potentially have different values.
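
          A tiny Haskell illustration of this (hypothetical example):

            -- n is bound once per call and never mutated; the next call
            -- (the next "instance of the problem") gets a fresh binding.
            factorial :: Integer -> Integer
            factorial n = if n <= 1 then 1 else n * factorial (n - 1)

            main :: IO ()
            main = print (factorial 10)  -- 3628800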

  • Amdahl's Law (Score:5, Insightful)

    by Mongoose Disciple ( 722373 ) on Friday December 05, 2008 @08:29PM (#26009367)

    Question is, how realistic is that?

    Amdahl's Law also tells us that the amount parallelization can speed something up is ultimately limited by the parts that can't be done in parallel.

  • Scheme (Score:5, Funny)

    by Rinisari ( 521266 ) * on Friday December 05, 2008 @08:31PM (#26009377) Homepage Journal

    (have I (feeling ((become popular Scheme) again)))

  • by cats-paw ( 34890 ) on Friday December 05, 2008 @08:32PM (#26009385) Homepage

    It's been said in the comments on Slashdot many times: learning functional programming techniques will improve your programming skills. There are many good functional languages out there, and many have imperative features for ease of transition. No, functional programming will not solve all of your problems, but it will give you that most valuable of all lessons: how to think about a problem _differently_.

    You don't need an excuse, start today.

    • Re: (Score:3, Interesting)

      by Duhavid ( 677874 )

      What language would you recommend for an OO programmer to start with?

      • by j1m+5n0w ( 749199 ) on Friday December 05, 2008 @09:31PM (#26009905) Homepage Journal

        I would say Haskell, but I think that's the language everyone should learn, so I'm biased. The typeclass system provides for some of the functionality of object oriented programming.

        If Haskell scares you, Ocaml is another good choice. It's a multi-paradigm language with an emphasis on functional programming, but it also allows you to use mutable state wherever you like (whether this is a good thing or not is a matter of some debate). It even has some traditional object-oriented programming features, but they tend not to get used much in practice.

        If you care about performance, they both have decent native-code compilers. My impression is that Ocaml is a bit faster for single-core tasks, but Haskell's parallel programming features are much better.

    • by LaskoVortex ( 1153471 ) on Friday December 05, 2008 @08:39PM (#26009465)

      You don't need an excuse, start today.

      The excuse is: it's fun. But if you do start, choose the right language for the job. Python, for example, seems good for FP but was not designed for the task. Don't choose a language that merely supports functional programming; choose a language that was designed specifically for it. You'll be happier in the long run when you don't run into the limitations of the language you chose.

      My 2c.

  • by Janek Kozicki ( 722688 ) on Friday December 05, 2008 @08:34PM (#26009407) Journal

    This reminds me of the /. article "Twenty Years of Dijkstra's Cruelty" [slashdot.org] from just a few days ago.

    The problem boils down to the fact that programming is, in fact, a very advanced calculus, and writing a program is 'deriving' it - reaching a correct formula along with a proof that it's correct. That's how software should be written anyway. And functional programming will only make that *simpler*, not harder.

  • Question (Score:3, Interesting)

    by Anonymous Coward on Friday December 05, 2008 @08:35PM (#26009411)

    So a quick question before I go and master FP: does the compiler automatically compile the code that can be done in parallel in the proper "way", or do I have to specify something?

    Also, if I rewrote an app from an imperative language in an FP one like Haskell, would I see that much of a difference on a multi-core processor?

  • Moore's Law? (Score:3, Insightful)

    by DragonWriter ( 970822 ) on Friday December 05, 2008 @08:39PM (#26009447)

    Chipmakers have essentially said that the job of enforcing Moore's Law is now a software problem.

    How is maintaining the rate of increase in the number of transistors that can be economically placed on an integrated circuit a software problem?

  • Function is easy (Score:4, Insightful)

    by TheRaven64 ( 641858 ) on Friday December 05, 2008 @08:41PM (#26009477) Journal

    The biggest problem with functional languages tends to be their type systems (I'm looking at you, Haskell). A functional language with a nice type system, like Erlang, is easy to pick up. And the example I picked totally at random, Erlang, also happens to have CSP primitives in the language, which makes parallel programming trivial. I've written code in it and then deployed it on a 64-processor machine and watched it nicely distribute my code over all 64 processors. If you program in a CSP style (which is easy), your code will exhibit 1000-way parallelism or more, and so will trivially take advantage of up to that many processors.

    And, actually, object orientation is a good option too. Alan Kay, who coined the term, defined it as 'simple computers [objects] communicating via message passing' - sounds a lot like CSP, no? The main difference is that OO is usually implemented with synchronous message passing, but if you implement it with asynchronous messaging (the actor model), then you have something almost identical to CSP. You can also add this implicitly with futures. I've done this in Objective-C for Etoile: just send an object an -inNewThread message, and any subsequent message you send to it is passed via a lockless ring buffer to the other thread and executed. We use it in our music jukebox app, for example, to run the decoder in a separate thread from the UI. Implementing it in the language, rather than the library, means you can do it more efficiently, so this by no means replaces Actalk or Erlang in the general case; but modern processors are fast serial processors, so it makes sense to program much larger chunks of serial code on these systems than Erlang or Actalk encourage.
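
    The message-passing style described here can be sketched in Haskell, too (an illustration using GHC's standard channels, not Erlang's or Etoile's actual APIs):

      import Control.Concurrent (forkIO)
      import Control.Concurrent.Chan (newChan, readChan, writeChan)
      import Control.Monad (forM_, replicateM)

      -- Two threads that share no state and communicate only by
      -- asynchronous messages over channels.
      main :: IO ()
      main = do
        requests <- newChan
        replies  <- newChan
        _ <- forkIO $ forM_ [1 .. 5 :: Int] $ \_ -> do
               n <- readChan requests
               writeChan replies (n * n)
        forM_ [1 .. 5 :: Int] (writeChan requests)
        print =<< replicateM 5 (readChan replies)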

  • Suggested reading. (Score:5, Informative)

    by DoofusOfDeath ( 636671 ) on Friday December 05, 2008 @08:50PM (#26009567)

    I've recently gotten into FP. I started with Erlang and then branched into ML and Haskell. In case you're interested, here are the best books I've encountered for each language:

    Programming Erlang [amazon.com]

    Programming Haskell [amazon.com]

    ML for the Working Programmer [amazon.com]

    Also, I'd definitely recommend starting with Erlang, because the Programming Erlang book made for a very easy introduction to functional programming.

  • by That's Unpossible! ( 722232 ) on Friday December 05, 2008 @09:03PM (#26009689)

    A. Many programmers start writing or re-writing their code in functional programming languages.

    or

    B. Programmers continue writing to their platform of choice, e.g. .NET, Java, etc., and the guys writing the virtual machines do the heavy lifting, making the VM execute more efficiently with multi-cores?

    I'll go with B.

    Apple is already proving this. Mac OS X Snow Leopard will have a lot of this built-in. Read about "Grand Central."

    • Re: (Score:3, Interesting)

      Corollary to B:

      Visual Studio 2010 will make parallel and multithreaded programming easier to accomplish. Essentially, instead of just
      for (x = 0; x < 1000; x++) dosomething();
      you'll have
      parallel_for (x = 0; x < 1000; x++) dosomething();
      and 2-1000 threads will kick off at once. I'm sure there can be thread pooling, locking, etc., but if the code is segmented well enough then parallel programming for 32 cores is probably not that traumatic for 90% of the world's code needs. Get into some high performance st
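
      A rough analogue of that parallel_for idea, sketched in Haskell with the async library (an illustration, not Visual Studio's actual API):

        import Control.Concurrent.Async (forConcurrently_)

        -- Stand-in for the real per-iteration work.
        doSomething :: Int -> IO ()
        doSomething x = print (x * x)

        -- Each iteration of the "loop" runs in its own lightweight
        -- thread; output order is nondeterministic.
        main :: IO ()
        main = forConcurrently_ [0 .. 999] doSomething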

  • by Stuntmonkey ( 557875 ) on Friday December 05, 2008 @09:03PM (#26009697)

    Look at the table of contents of this BYTE magazine from 1985 [devili.iki.fi]. In a nutshell it said the same thing as this article: Functional languages are the great hope for solving the parallel programming problem. Only then the languages were different: Hope, Linda, and Prolog were among them.

    My response back then was to get excited about FP. My response now is: Where is the proof? Can anyone name a single instance where a functional paradigm has yielded the best measured performance on a parallel computing problem? In other words, take the best functional programmers in the world, and pair them up with the best tools in existence. Can they actually create something superior, on any problem running on any hardware? This is a very low bar, but until it's demonstrated FP will be confined mostly to the lab.

    IMHO the path forward is to treat parallel programming like just another optimization. As we know, the vast majority of your code doesn't need to run fast, and you get most of the performance benefit by optimizing small bits of code that really matter. I suspect the same thing will happen with parallel programming: In a given application only a few areas will benefit much from parallelism, and these tasks will probably be very similar across applications. Graphics rendering, large matrix math, video encoding/decoding, and speech recognition would be examples. People will treat these as special cases, and either develop special-purpose hardware (e.g., GPUs), or libraries that encapsulate the nitty-gritty details. The interesting question to me is what is the best runtime model to support this.

    • Re: (Score:3, Interesting)

      by gomoX ( 618462 )

      Functional programming is not a buzzword that is supposed to be better at parallelism. When coding in a stateless fashion (which is what FP is all about), function reduction can be split transparently across many computers. There are no locks, no funny semaphores and mutexes, no deadlocks, no nothing. It just works, because of its uncoupled, 'no side effects, ever' nature.

      There is one kind of early optimization that is not premature: architectural optimization. If you design your whole system to be synchronous, yo

      • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Friday December 05, 2008 @10:18PM (#26010213)

        Auto-parallelization of functional programs has been proposed for decades now, and every attempt has fallen on its face as the overhead has killed any gains. Current parallel FP research isn't even putting that much effort into auto-parallelization, because most PL researchers consider it a dead end --- taking a random functional program and evaluating all its thunks in parallel as futures, or some similar mechanism, is not going to solve the problem.

        Instead, most of the current research is in programmer-level primitives for designing and specifying inherently parallel algorithms. There is some of this in both the FP and non-FP communities.

    • by arevos ( 659374 ) on Friday December 05, 2008 @09:54PM (#26010073) Homepage

      My response back then was to get excited about FP. My response now is: Where is the proof?

      Whether functional programming is the best paradigm for parallel computing is undecided, but it does have a couple of advantages over imperative programming.

      First, imperative programming specifies the order of evaluation, whilst functional programming does not. In Haskell, for instance, an expression can essentially be evaluated in any order. In Java, evaluation is strictly sequential; you have to evaluate line 1 before line 2.

      Second, imperative languages like Java favour mutable data, whilst functional languages like Haskell favour immutable data structures. Mutability is the bane of parallel programming, because you need all sorts of locks and constraints to keep your data consistent between threads. Programming languages that do not allow mutable data don't have this problem.
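
      A small Haskell sketch of both points (illustrative, using the parallel package): because a and b below are pure and immutable, the runtime is free to evaluate them in either order - or, with par, at the same time, with no locks:

        import Control.Parallel (par, pseq)

        -- par sparks a in parallel; pseq forces b first on this thread.
        -- Build with ghc -threaded, run with +RTS -N.
        main :: IO ()
        main = a `par` (b `pseq` print (a + b))
          where
            a = sum     [1 .. 10000000 :: Int]
            b = product [1 .. 20 :: Int]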

    • The difference between 1985 and today is the end of the free lunch in clock rates. 1985 was roughly the middle of the last great clock rate stall - during the shift from minicomputers to micros, Moore's Law expressed itself not as speedup, but as decrease in cost per bit/gate.

      Then we entered the great die-shrink and clock rate increase era, where for about 20 years processors really did get twice as fast every 12-18 months. Why expend the effort to bury a problem in parallel hardware when you can see fast

  • example (Score:5, Interesting)

    by bcrowell ( 177657 ) on Friday December 05, 2008 @09:06PM (#26009721) Homepage

    As an example of the learning curve: I wanted to learn a little OCaml, so I played around with this [inria.fr] insertion sort example. I used it to sort a list of 10,000 integers, and it took 10 seconds, versus <1 second in C with linked lists. Not too horrible. But changing it to 100,000 integers made it die with a stack overflow, so I'm guessing its memory use goes like n^2. However, it's not at all obvious to me from looking at the code that this would be the case. I think if I wanted to do a lot of OCaml programming I'd have to develop an "FP Eye for the Straight Guy." Probably if you wanted to make it perform better on big arrays you'd want to make it tail-recursive, but it's not totally obvious to me from the code that it's *not* tail-recursive; although the recursive call isn't the very last line of code in the function, it is the very last thing in its clause...?

    I know of at least one well-known OSS project in Haskell, written by a very smart guy, that is really struggling with performance issues. I wonder whether bad performance is to FP as null-pointer bugs are to C. Sure, a sufficiently skilled programmer should theoretically never write code that dereferences a null pointer, but nevertheless my Ubuntu system needs a dozen security patches every month, many of which are due to null pointers, buffer overflows, etc.
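
    The tail-recursion issue above translates directly; a sketch in Haskell rather than OCaml (hypothetical names):

      {-# LANGUAGE BangPatterns #-}

      -- Not tail recursive: each element adds a stack frame, so a big
      -- enough list can blow the stack, like the insertion sort above.
      sumNaive :: [Int] -> Int
      sumNaive []     = 0
      sumNaive (x:xs) = x + sumNaive xs

      -- Tail recursive with a strict accumulator: constant stack space.
      sumAcc :: [Int] -> Int
      sumAcc = go 0
        where
          go !acc []     = acc
          go !acc (x:xs) = go (acc + x) xs

      main :: IO ()
      main = print (sumAcc [1 .. 10000000])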

  • don't bother (Score:3, Interesting)

    by speedtux ( 1307149 ) on Saturday December 06, 2008 @09:57AM (#26012707)

    When people say they want to learn "functional programming", they usually mean some functional programming language. Common functional programming languages (OCaml, SML, Haskell, Lisp, Scheme, etc.) are duds when it comes to performance: in principle, they encourage parallel-friendly programming; in practice, they get many basic language-design and performance issues wrong.

    It's also silly to rewrite large amounts of code when usually only a small percentage of it is performance-critical. If you need to speed up C code to run fast on a multicore machine, optimize the inner loops. All the "functional programming" you need for that is available right within C in the form of OpenMP. You'll understand OpenMP a little better if you have used a "real" functional programming language, but the concepts aren't rocket science and you can do just fine as a C programmer.

    • Re: (Score:3, Interesting)

      by moro_666 ( 414422 )

      I kind of agree: don't bother with FP.

      The thing is, our applications today are running hundreds of processes in a box; even if you get 256 cores on a CPU, we'll still keep them all busy without a change.

      Whoever wrote the logic for the article was probably on coke or something. Seriously, no company with over 50 employees (and hence reasonable salaries for non-owners) will migrate to any functional language any time soon; they can't afford it, they don't have time for it, and most of all, they don't want a soft
