World's "Fastest" Small Web Server Released, Based On LISP 502

Cougem writes "John Fremlin has released what he believes to be the world's fastest web server for small dynamic content, teepeedee2. It is written entirely in LISP, the world's second-oldest high-level programming language. He gave a talk at the Tokyo Linux Users Group last year, with benchmarks, which he says demonstrate that 'functional programming languages can beat C.' Imagine a small alternative to Ruby on Rails, supporting the development of any web application, but much faster."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday May 25, 2009 @01:25PM (#28085301)

    Dunno about you, buddy, but I find LISP a lot easier to read and write than all the C-like languages (although pure C itself is OK when it sticks to what it's good at - being a set of macros for assembler when programming systems-level stuff).

  • An alternative (Score:3, Interesting)

    by Ed Avis ( 5917 ) <ed@membled.com> on Monday May 25, 2009 @01:26PM (#28085305) Homepage

    You might also be interested in SMLserver [smlserver.org] which embeds Standard ML into Apache, and apparently is pretty fast.

  • by Anonymous Coward on Monday May 25, 2009 @01:37PM (#28085431)

    And Java is faster too!

    (rolls eyes)

    Different tools are good for solving various problems.

    Yeah, I know certain library routines in certain languages are better than others.

    Interpreted languages, in general, are not faster than compiled languages. Period.

    This "faster than C" canard keeps getting trotted out and shot down every time.

    Well, there is one language faster: assembly.

  • by bill_kress ( 99356 ) on Monday May 25, 2009 @01:52PM (#28085607)

    Actually, there are things you can do with Java/C# that you can't do with C period.

    C (and even assembly) can't realize that the same inputs to a routine always cause the same output, and therefore cache the return value and just return it (not without a lot of buggy, custom code anyway). (I'm not saying Java/C# DO this, they may--I understand they were working on it... But just trying to give you an idea of what CAN be done)

    Java/C# do compile to assembly by the way--only the parts that are run often. And the compile step can know a lot more about the runtime configuration of the system.

    Currently these kinds of optimizations don't actually have Java running faster than C; currently it runs at about half the speed on average, and significantly slower for some operations. A very few operations can be faster.

    The real issue is, when Java 7, 8 or 9 comes out, ALL java code ever produced will run faster without touching the compiled system.

    All I'm really doing is offering a counter to your discussion about assembly being clearly faster--Logically I assumed it would be too--just makes sense--but I assumed wrong.

  • by epiphani ( 254981 ) <epiphani&dal,net> on Monday May 25, 2009 @01:53PM (#28085625)

    No, definitely not in speed.

    He wrote a LISP-based memory-only webserver that could respond to requests roughly 10% faster than lighttpd with php. I promise you, if I wrote a C implementation that performed only the functionality he implemented, it would blow it out of the water. In fact, before anyone else comes out with the "X is faster than C!" claim, I'll leave the challenge out there:

    I will prove that anything written in a higher-level language will not be as fast as my implementation of it in C. I leave this challenge out to anyone to take. (*)

    Seriously, I'm sick of this crap. Bring it on.

    (*) Caveat: It must be a small challenge involving a relatively simple task. I don't have a lot of time to waste on this.

  • Based on my theoretical understanding of how computers work, I thought HTTP daemon performance depended mostly on

    • I/O performance, much of which is controlled by the kernel (in particular the file system).
    • A good caching strategy (to minimize I/O), again done by the kernel.
    • Good networking performance, controlled by the networking stack in the kernel.
    • Database performance, controlled by the RMS-DB, BDSM(R) or whatever it is.
    • Process spawning speed (for CGI), again controlled by the kernel.

    Would someone care to correct me?

    Note that TFA (well, the slideshow) measures performance in requests per second. That's a very useful measure, but it's compared to Ruby (Mongrel?) and PHP (Apache?). I'm not sure what that comparison means. Does Apache not support lisp, or only as CGI?

    Is there something stopping Apache from being sped up? Is he measuring the performance of LISP, or the performance of a HTTP daemon?

    I'm a bit confused...

  • by Morphine007 ( 207082 ) on Monday May 25, 2009 @02:09PM (#28085805)
    Hi, I'm from Canada. We're those soft-spoken guys to the North of you who were used as shock troops in both those wars [wikipedia.org] you mention. We did the job when no one else could [wikipedia.org].

    Your current soldiers are solid. Your previous soldiers were solid. This isn't a pissing contest, but when it comes to having historically solid troops I think we, at least, have earned the right to reflect on the sacrifices of our respective troops on different days * [wikipedia.org]. Yours on your day, and mine on my day. Which is to say, we know it's memorial day. Your soldiers are and have been heroes, but keep your holiday to yourselves. Just as the rest of us keep ours to ourselves.

    * - it's worth noting (though I can't find the citation) that the method by which the Canadians held Kapyong against the 3-5:1 odds was by calling down artillery on their own position

  • by Onyma ( 1018104 ) on Monday May 25, 2009 @02:11PM (#28085825)
    I first learned LISP using the watered down version included in AutoCAD while writing huge customization projects in the 80's. I loved the language so much I dove into it full force and enjoyed it thoroughly. To me it was so inherently elegant I wanted to use it everywhere. Obviously however making a living meant most of us had to focus our energies elsewhere but something like this makes me all giddy again. I think I have some playing to do!
  • by monk ( 1958 ) on Monday May 25, 2009 @02:17PM (#28085893) Homepage

    You aren't looking very closely if you're missing it.

    By "c-like" I believe Graham meant elements of syntax and approach.

    Method declarations in Java take the form:
    return_value name(args)
    {
        statement;
        data_structure.member = assignment;
    }

    A C function looks like:
    return_value name(args)
    {
        statement;
        data_structure.member = assignment;
    }

    The approach stuff is harder to summarize in a post, but think of the differences in the use of macros and differences in binding as good examples.

    James Gosling is one of the people who called Java a "C-like language that avoided the pitfalls of C++"

    (full disclosure: I used to work for Sun as a Senior Java Architect, so my opinion may be colored by the chip they put in my brain)

  • First, his blog is standing up to a slashdotting. That's impressive.

    Not unless you're used to desperately overburdened shared hosting. My six-dollar-a-month account from HostMonster has handled multiple simultaneous slashdottings with concomitant reddit and digg traffic several times. One of my customers sustained roughly seven megabits of traffic for several days straight inside a VM with no problems.

    Slashdot traffic taking a site down means the site isn't hosted at a reputable host, these days.

    If your skills were more common we'd have a better world.

    As much as Lisp people want to say that Lisp lost because of the price of Lisp machines and Lisp compilers, it actually lost because it isn't a particularly practical language; that's why it hasn't had a resurgence while all these people move to Haskell, Erlang, Clojure, et cetera.

    Lisp is a beautiful language. So is Smalltalk. Neither of them was ever ready to compete with practical languages.

    It's very tough to find someone who can code well in C

    Er, no, it isn't. You just have to know where to look, and to not get stuck in the Silicon Valley highschool mindset, where nerf guns are believed to adequately substitute for health care, and where nobody can name a formal method.

    C programmers are the most numerous professional programmers on Earth today, and we're in the highest unemployment for programmers since the dot com bust, with a number of well meaning companies blindly ditching C for whatever the new hotness is (and eventually going right back). Hell, I get C/C++ programmers for things that aren't looking for C work, because they (rightfully) believe they can pick up the other language as they go and do a better job than the natives due to their understanding of actual costs.

    If you can't find someone who writes good C, either there's something wrong with how you're attracting staff, or you're not judging them skillfully, or they have some reason to stay away. I'm putting my chip on #3.

  • by Gorobei ( 127755 ) on Monday May 25, 2009 @03:02PM (#28086427)

    Actually, you can be faster than C in many cases. C must generate suboptimal code in certain cases because it cannot protect against edge cases like pointer aliasing.

    I've seen a LISP compiler generate better loop code in some cases, simply because it can prove arrays are non-overlapping, or that X*X is provably positive.

  • by MrMr ( 219533 ) on Monday May 25, 2009 @03:04PM (#28086443)
    Funny, I always say the same about Fortran. Here's a toy test program for stuff I often need. I would be impressed if C beats this.

          program co
          implicit none
          double precision mpi
          parameter (mpi=3.141592653589793238462d0/1.024d3)
          double complex r(10240)
          integer i,j
          do j=10,110
             do i=-5119,5120
              r(i+5120)=sqrt(dcmplx(mpi*i*j))
             end do
             write(j) r
          end do
          end
  • by Anonymous Coward on Monday May 25, 2009 @03:10PM (#28086507)

    Your 'challenge' was already issued. From Git: "It is faster than all(?) other web application frameworks for serving small dynamic webpages. Please let me know if you have a case where another framework is faster!"

    Put up or shut up. No one's interested in trivial examples. That's what the great language shootout on Alioth is for, where C slacks more and more every year. Or from http://scienceblogs.com/goodmath/2006/11/the_c_is_efficient_language_fa.php [scienceblogs.com] :

    "I didn't know what language to use for this project, so I decided to do an experiment. I wrote the LCS algorithm in a bunch of different languages, to compare how complex the code was, and how fast it ran. I wrote the comp bio algorithm in C, C++, OCaml, Java, and Python, and recorded the results. What I got timing-wise for running the programs on arrays of 2000 elements each was:

            * C: 0.8 seconds.
            * C++: 2.3 seconds.
            * OCaml: 0.6 seconds interpreted, 0.3 seconds fully compiled.
            * Java: 1 minute 20 seconds.
            * Python: over 5 minutes.

    About a year later, testing a new JIT for Java, the Java time was down to 0.7 seconds to run the code, plus about 1 second for the JVM to start up. (The startup times for C, C++, and Ocaml weren't really measurable - they were smaller than the margin of error for the measurements.)"

  • by Anonymous Coward on Monday May 25, 2009 @03:22PM (#28086655)

    how about beating Haskell in this benchmark: http://shootout.alioth.debian.org/u64q/benchmark.php?test=threadring&lang=all

  • by Shados ( 741919 ) on Monday May 25, 2009 @03:24PM (#28086683)

    The main difference is that C is, and always will be, optimized at compile time. Virtual-machine languages can dynamically optimize themselves at runtime. Some of the later iterations of the Java and .NET runtimes can notice patterns at runtime (an initial performance hit, obviously), then make assumptions about further calls while checking that those assumptions still hold (my understanding is that lately Java has made leaps and bounds in that direction).

    Then, when the pattern "breaks", it reoptimizes the piece of code without the assumption. Depending on the system, that can yield tremendous performance improvements. As long as things are optimized at compile time only, you won't be able to go that far. Other examples include system-wide memory compacting, doing away with useless locks at runtime, etc.

  • That's a myth. (Score:4, Interesting)

    by Estanislao Martínez ( 203477 ) on Monday May 25, 2009 @04:17PM (#28087189) Homepage

    Reason being is that C is the closest high level language to how a processor actually operates.

    Once you get things like branch prediction, speculative execution and pipelining into the picture, no, C isn't really any closer to how the processors operate. Making efficient use of a modern CPU involves detail at a much, much lower level than C exposes.

    The performance killer for high-level languages isn't really the abstraction away from the machine instruction set; it's garbage collection. And even then, it's mostly because GC tends not to play well with memory caches and virtual memory; a simple stop-and-copy garbage collector is actually algorithmically more efficient than malloc/free, but absolutely atrocious with caches and VM.

  • by ratboy666 ( 104074 ) <<moc.liamtoh> <ta> <legiew_derf>> on Monday May 25, 2009 @05:24PM (#28087817) Journal

    Trolling sure sounds easy, but...

    Gambit-C Scheme vs. C

    I'll make it easy for you. It's the two-minute litmus test. Even easier -- I'll give you the pseudo-C code:
    Task: compute n! for n >= 1000.

    In Scheme (Gambit 4.2.8, using infix):

    int factorial(int n) {
    if (n <= 0) {
    1;
    } else {
    n * factorial(n - 1);
    }
    }

    compile with: gsc f.six
    and run it:

    gsi
    Gambit v4.2.8

    > (load "f")
    "/home/user/f.o1"
    >(factorial 1000)
    4023...0000

    Your challenge? Write a C version in two minutes, tested and compiled. Now, as the final icing, run the C version on smaller numbers, and compare the performance -- did you forget to compile in small-integer versions? (try factorial(12) a million times).

    I'll wait (another two minutes). Compare the performance against the LISP version. Did you have to write two versions -- one for big integers and one for small integers? That is pretty well the only way to keep a speed advantage... I hope you wrote it that way. Did you remember to put in 32/64-bit conditionals to retain your advantage on a 64-bit platform?

    I think your C code now looks like this (it should):

    #define FACT_LIMIT 12 /* for 32-bit int; I don't know what the cutoff is for 64-bit */
    #include <bignum.h>
    /* This only gets executed a maximum of FACT_LIMIT times; leave it recursive */
    int fact_integer(int n) {
    if (n <= 0) {
    return 1;
    } else {
    return n * fact_integer(n - 1);
    }
    }
    /* May wish to rewrite to an iterative form */
    bignum factorial(bignum n) {
    if (compare_lt(n, FACT_LIMIT)) {
    return int_to_bignum(fact_integer(bignum_to_int(n)));
    }
    return bignum_mult(n, factorial(bignum_dec(n)));
    }

    You choose the bignum package to use. Or, for more fun, write it yourself. If you wrote it yourself, you remembered to switch to FFT-style multiplication at bigger sizes? Or Karatsuba?

    Now, we have only coded to a recursive form, but, since bigints are not first-class in C, we don't know about memory reclamation (leakage). I hope you know the gmp library, or can roll up a gee-whiz allocator on your own. The gmp library would be cheating, by the way -- YOU DID CLAIM YOUR IMPLEMENTATION IN C.

    If recursion is viewed as a problem, the Gambit-C version can be recoded as:

    int factorial(int n) {
    int i;
    int a;
    if (n <= 0) {
    1;
    } else {
    a = 1;
    for (i = 1; i <= n; ++i) {
    a *= i;
    }
    a;
    }
    }

    I am sure that something equivalent can be done in the C version. But the normal flow of control stuff doesn't know about bignums. We COULD make the incoming parameter an int, I guess... which works for factorial() but may not be as workable for other functions.

    Answers:
    - gmp does better than Gambit-C on bigint multiply, using FFTs.
    - breaking the result into two separate functions is needed for C to come ahead.
    - yes, C is faster, at the expense of a lot more programming.
    - if I want to, I can simply drop C code into my Gambit-C program on an as-needed basis. The Gambit-C code still looks a whole lot cleaner than the C version, and ties it for small integer performance. The bigint performance is still a "win" for gmp, but I can use THAT package directly as well in Gambit-C.

    Win:
    - Gambit-C. The prototype was finished to spec in two minutes. Optim

  • Re:He's also right (Score:5, Interesting)

    by amn108 ( 1231606 ) on Monday May 25, 2009 @05:25PM (#28087825)

    I'll start with the good things. First of all, I like your style of writing - clear, precise and on point (of your choosing). Second, you lay out the scenery here quite well.

    Now, to the bad things. I can almost bet you are either not a day-to-day programmer (as opposed to casually writing simple bits of code in C, perhaps), or you just do not know much computing history or the latest developments in compilers and technologies in general. Maybe you write niche software and are not interested in these developments, I do not know, but I think it is a bit odd that you give such a good and knowledgeable read, yet completely (in my humble opinion) miss the facts overall.

    Machines are different too. There is RISC, there is ZISC, there is VLIW, and there is the CISC/RISC hybrid that modern CPUs mostly are. These days we are also starting to think about how to utilize vector processors, which gamers know as their video cards. Everyone has one, whether they know it or not; nowadays they install a 500-megaflop graphics card in PCs used by hotel receptionists.

    So, C was designed to go close to the metal, yes, but since the metal is different, C may hit or miss depending on the architecture too.

    What is far more important, given that today we still use mostly the same instruction set we used when C was invented, is that you are absolutely mistaken if you think high-level languages will not approach C. You overestimate hand-optimization and underestimate modern compilers. It is illogical to assume that a person IN FRONT of the computer terminal will know and benefit from knowing how a program of his writing may be optimized. It is the computer itself that, with a sufficiently well-developed compiler, has the potential to optimize code. The mere fact that in practice it is not always so is because the field is immature, but not to worry, rapid developments are being made.

    Also, things like static typing, static code analysis and other logical solutions absolutely negate any benefit C may have. Also, I am surprised you compare garbage collection to C, given how programs developed in C still occasionally need, depending on their domain, to allocate objects on the heap, and how most virtual machines allocate values on the stack under the hood, even those with a garbage collector.

    Anyways, to cut it short here, and perhaps give you a chance to explain and ridicule me :-), I will just say I find your comparison of C to, say, LISP grossly oversimplified, and it does not work on me. It is in fact programming paradigms that have liberated compiler writers to write increasingly effective compilers. Spend some time reading on the theory of computation on Wikipedia, for instance; it has given me a whole new look at the state of the art. The bottom line is, teaching computers to translate human-typed grammar more efficiently into their program-execution machinery is getting much cheaper and much more fruitful than spending time or energy hand-writing C code, and I am not talking about the "compromise of man hours": I am saying that, both LISP and C programs being equally 'good', they can be equally fast, especially depending on the LISP compiler.

    Thank you for your attention, I know how precious it is here on Internetlands.

  • Re:He's also right (Score:4, Interesting)

    by setagllib ( 753300 ) on Monday May 25, 2009 @05:36PM (#28087913)

    C also lacks several important features for optimization, such as static typing,

    Surely you jest. C has weak static typing, but it's static typing all the same, and any " + " you see in C code becomes a specific instruction once compiled. Just because that + could be for pointers, doubles or ints doesn't mean it's not static once read in context.

    The weakness comes from standard C accepting almost any implicit conversion and cast, which is trivially changed to somewhat strong (but not runtime-enforced) typing by using compiler warnings and errors.

    or general reasoning about memory and parallelism.

    Parallelism remains fastest in C, especially in OS kernels where the cost of synchronization primitives is close to a bare minimum. If you have a modern compiler that can distinguish vectorisation from its own ass, you'll get healthy use of parallel code pipelines too.

    The CPU executes instructions, oftentimes in parallel pipelines, using an instruction cache and branch prediction - none of which are modelled in the C language.

    None of which has to be. If you need that kind of performance, you have two options, both with free software:

    a) Embed simple non-standard statements to communicate your branch prediction beliefs

    b) Use profile-guided optimisation to automatically sample real branching statistics, and recompile based on those

    Either way you end up with superior branch prediction performance. Certainly far far superior to what you'd get with LISP or Python.

    If you knew even half as much about language implementations as you claim to, you'd know that the C language holds its speed crown simply because it has attracted an _enormous_ amount of research into optimizing compilers, largely because the way C works _isn't_ the way the CPU works.

    Ok, so what non-assembly language do you propose that does work the way a CPU works? C is the closest we have, and with modern compilers it's way faster than any other usable language. The effort of writing C is far lower than that of writing assembly, and you generally get better performance unless you know specific SIMD/MIMD instructions to replace a loop or two.

  • Re:He's also right (Score:2, Interesting)

    by Anonymous Coward on Monday May 25, 2009 @05:42PM (#28087965)

    Surely you jest. C has weak static typing, but it's static typing all the same, and any " + " you see in C code becomes a specific instruction once compiled. Just because that + could be for pointers, doubles or ints doesn't mean it's not static once read in context.

    That's great. You're still missing all the nice things that make a static type system nice. There are functional languages with Turing complete static type systems. Indeed, the nice thing about that is that every static type deduction is equivalent to a constructive proof quantifying over type elements. Can C even define a type as a first-class object? Maybe with macros, but the average C programmer is scared to death of higher order quantification. They seem to want to tell the computer exactly what to do, step by step, instead of letting it figure it out itself at compile time.

    Heck, even C++'s type system is Turing complete, via templates.

  • Re:He's also right (Score:4, Interesting)

    by shutdown -p now ( 807394 ) on Monday May 25, 2009 @06:51PM (#28088569) Journal

    So I have to agree with the grandparent. If the LISP heads think LISP is faster than C, they are kidding themselves. I'm not saying a good LISP program can't be faster than a bad C program, but if you have equal skill in optimization, sorry C will win out because in the end it will generate more efficient machine code and that's all that matters. All the theory of different programming paradigms in the world isn't relevant to how the CPU is actually going to do things.

    It's true that any given piece of code can be written to perform faster (or at least not any slower) in C than in Lisp, Java, C#, or whatever your favorite high-level programming language is.
    However, this doesn't mean that applications written in idiomatic, well-written C are necessarily faster than some-other-language. Why? Well, here's a very simple example.

    Let's say you need to sort some stuff in an array. In C, you can hand-code a quicksort or a merge sort inline that will blow anything else out of the water... if you're only doing it once. But you're probably not. So you refactor it to a function to reduce code duplication. Good, but now you need to sort different types, and with different comparison logic - so you add a function pointer for a comparison function. In other words, you get the stock qsort:

    void qsort(void* array, size_t elements_count, size_t element_size, int (*comparer)(const void*, const void*));

    And at that point you're already slower than C++'s std::sort, because:

    1) qsort takes a function pointer and does indirect calls through it, while std::sort will take a function object and inline all calls (in theory a C compiler can inline calls made via a function pointer as well, but I've yet to see one that does).

    2) The qsort comparer argument always takes its arguments by reference, even when it's some type that's more efficiently passed by value and in a register (e.g. int). std::sort function object doesn't have this limitation - it can take arguments either by value or by reference.

    The real problem here is the lack of genericity. If you hand-code the sort for every specific case (type + comparison function), you'll be better off in C, but then you'll get tons of duplicate code, which is bad for maintainability. And C doesn't offer any decent ways for compile-time code generification (only macros, but they are so limited and generally meh), so most people just use the more-generic-but-slower solution and don't bother. And - end up slower than C++.

    It should also be noted that the above C vs C++ comparison isn't limited to C++. For example, a direct std::sort analog can be written in C#, fully generic, and all arguments will apply to it as well (JIT will inline the comparer call, and so on).

  • That you believe several books over the course of six years constitute a resurgence, especially given the historic nature of the language, kind of goes a pretty long way towards proving my point about its nearly non-existent market share.

    Don't get me wrong, I think LISP is a wonderful language. But, let's not do ourselves the disservice, please, of pretending that it's been a major player since the 1960s. If you look at the list of supposedly dead languages that majorly outpace LISP in real world usage measured either as new code or maintained code (eg Delphi, Clipper, Fortran, Cobol, PL/I, Ada, Forth, ANSI Pascal, Object Pascal, ColdFusion, pre-.NET ASP, all on both metrics) you get a clearer idea of where things actually stand.

    If LISP is so amazing, and if LISP has first mover advantage over anything the average programmer has ever heard of, why is it so resoundingly a bit player?

    There are downsides to LISP. Lots of them. Serious ones. It hasn't stayed this dead for 60 years because it's the tragic forgotten child of programming; every freshman who wants to sound educated thumps it at their first opportunity, frequently without ever having written a line (which is not to call you a freshman, just to point out how not-unknown it is.)

    It's a little like SICP. If it's been that free, that well known, and that easily accessible for 20 years, how come it's being dropped from the curriculum by the university that published it, and how come its design principles are largely unseen even in the work of people who have read it?

    There's a lot to be said for academic languages and academic exercises; they open our eyes to many new approaches to problems.

    But don't kid yourself. They died for a reason. Why is it that all the supposedly awful languages and design strategies are dominant?

    It's because they work. For all their warts, for all their maintenance problems, for all the infrastructure you have to write, they work.

    New practical languages keep appearing which adopt many of the lessons of LISP. Ruby got a lot of LISP's problems removed, though it's still got a lot of problems of its own; Haskell can say the same. Erlang's got most of those problems cleared up, and is a practical real-world language for a lot of things.

    But dude, if the most impressive thing you can find is the application of graph search to a complex web form with credit card processing that the typical college sophomore could throw together in about a month, I mean, I'm really not sure what to tell you. Orbitz is ridiculously slow for the amount of data it processes, its user interface is awful, it copes poorly with unexpected things like uncommon use of the browser back button, and I usually have to go to it first so that I can check everywhere else and then by the time I'm done everywhere else maybe Orbitz has finally finished its first search.

    What Orbitz does that's impressive is their ability to negotiate ticket prices. I go there because they get the bottom dollar bid. If that's your idea of something you can hold up to show the success of LISP, I've got to ask you: why have you gotten down to rare occasional me-too projects as your shining beacon?

    Yahoo! Stores was lisp too. (Note the past tense.)

    Big whoop.

    When it gets down to it, you should actually try writing something like that some time in LISP. Then try writing it in another language. It's not really all that different. It'll be maybe the dollar sign instead of the parentheses whose ink wears off on your keyboard, and the whatever other language you write will probably be somewhat bulkier (though if you're working in a language like Erlang, Haskell, Mozart-Oz or Forth, it'll be substantially shorter).

    Meh. Ten extra letters to get a three line algorithm done. Trade that for real exceptions and a strong type system, and you've chosen C++. Trade that for the pi calculus (which is hella more expressive than the lambda, and typically completely foreign to the LISPers who preach syntax superiori

  • by Xiroth ( 917768 ) on Monday May 25, 2009 @07:16PM (#28088787)

    Finally, his code seems typical of what I've seen from good LISP programmers -- including even at times myself. Poor documentation. The code is simple, elegant, and should "speak for itself". Well it doesn't. Not to someone trying to maintain it.

    C programmers -- perhaps because of the nature of the language -- seem less prone to this particular trap, though still bad.

    Most likely because it's much easier to verbalise what a small segment of C is doing compared to a small segment of LISP. When writing C, I usually have a mental running commentary of what each line of code is doing. When writing LISP, I found that thinking about what it was doing in English was only stuffing me up, and I really had to let go of that kind of 'verbal thought' and think quite differently - in some ways more mathematically, but in some ways unique to functional programming. All this does make it a little more difficult to write comments for LISP, since 'shifting gears' to write in plain English is a much more difficult leap.

  • by shutdown -p now ( 807394 ) on Monday May 25, 2009 @07:37PM (#28088949) Journal

    That's called "lazy evaluation", and it is a language feature. It's the C program's fault for unnecessarily computing values it is never going to use, instead of computing them when demanded.

    I know what it's called (in case you missed it, I called it that in my original post), and I know it's a feature. But if you're measuring sort performance between two different languages, you have to make sure that each one actually, you know, performs the sort. It's good that the Haskell compiler is smart enough to figure out the result wasn't used in the test, but that doesn't help you determine what the performance will be IRL, when you actually do use the sort result.
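    The pitfall described above isn't Haskell-specific. Python's generator expressions are lazy in the same way, so a minimal sketch is possible here (the `expensive` function and timings below are illustrative stand-ins, not the article's benchmark): building the lazy sequence costs almost nothing, and only forcing it does the work a benchmark is supposed to measure.

    ```python
    import time

    def expensive(x):
        time.sleep(0.001)  # stand-in for real per-element work
        return x * x

    n = 50

    # Merely building the lazy sequence does no work:
    # timing this measures nothing.
    t0 = time.perf_counter()
    lazy = (expensive(x) for x in range(n))
    build_time = time.perf_counter() - t0

    # Forcing the result performs the actual computation.
    t0 = time.perf_counter()
    forced = list(expensive(x) for x in range(n))
    force_time = time.perf_counter() - t0
    ```

    A "benchmark" that never forces the result is measuring the cost of constructing a thunk, not the cost of the computation.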

  • by Gorobei ( 127755 ) on Monday May 25, 2009 @08:10PM (#28089283)

    These certainly help, but they are often hard to use in large programs: a low-level routine may declare that it takes restricted pointers, but it has no way to enforce that callers follow the rule. So, in big multi-developer systems, you tend to wind up with restricted-pointer code kept in internal library functions, not exported ones, and the vast bulk of the app compiled defensively with full aliasing protection. Either that, or the app fails every other Wednesday for some strange reason.

  • by m50d ( 797211 ) on Monday May 25, 2009 @08:21PM (#28089365) Homepage Journal
    It is very far off. I'm not sure what criteria you're using to determine what's "not very far off", but if it's first-class functions, then most modern mainstream languages (with notable exceptions of C++ and Java) aren't "far off" from Lisp. But I would say that it's a wrong definition.

    First-class functions, lambda, map and friends, generator expressions, and so on. My criterion is what it feels like to write, and in that sense Python is very close. (I'd go so far as to say it's better, but that's going to be contentious).

    What really sets Lisp apart is how the program itself is defined in terms of structures that are fundamental to the language, and how those structures can easily be manipulated in the language itself. Simply put, Lisp - especially Common Lisp (though R6RS is neat, too) - is the pinnacle of metaprogramming so far, and that is its defining feature.

    Lisp fans always claim this, but I think it's a red herring. TCL takes the same principle even further, and it's nowhere near as popular or admired. Metaprogramming isn't what makes lisp good to program in, it's all in the first-class functions and functional programming flow control tools.
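    For what it's worth, the feature list above (first-class functions, lambda, map, generator expressions) can be sketched in a few lines of Python; the names here are illustrative, not taken from any of the projects under discussion:

    ```python
    # First-class functions: functions are ordinary values
    # that can be passed around and returned.
    def compose(f, g):
        return lambda x: f(g(x))

    inc = lambda n: n + 1
    double = lambda n: n * 2
    inc_then_double = compose(double, inc)

    # map and friends: apply a function across a sequence.
    squares = list(map(lambda n: n * n, range(5)))

    # Generator expressions: lazy sequences built inline.
    evens = (n for n in range(10) if n % 2 == 0)
    first_three_evens = [next(evens) for _ in range(3)]
    ```

    Whether this "feels like Lisp" is exactly the contentious point in this subthread, but it shows why the surface toolkit has stopped being a differentiator.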

  • by shutdown -p now ( 807394 ) on Monday May 25, 2009 @08:29PM (#28089437) Journal

    First-class functions, lambda, map and friends, generator expressions, and so on.

    By those, virtually any functional language is Lispish - SML/OCaml/F#, Haskell, whatever.

    Also, C# 3.0+ would be Lispish (it has everything that you've listed), as well as Scala.

    Lisp fans always claim this, but I think it's a red herring. TCL takes the same principle even further, and it's nowhere near as popular or admired.

    What does it have to do with being "popular" or "admired"? It is, in general, pretty widely acknowledged in the PL community that Tcl is indeed one of the languages close to the spirit of Lisp, though not its syntax.
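    To make the "program as data" claim in this exchange concrete: Lisp macros manipulate s-expressions directly, and Python can only approximate that through its `ast` module, but the approximation is enough for a sketch. Here a program is parsed into a tree, rewritten as ordinary data, then compiled and run (the `Doubler` transform is a made-up example):

    ```python
    import ast

    # The program "1 + 2" as a data structure we can walk and rewrite.
    tree = ast.parse("1 + 2")

    class Doubler(ast.NodeTransformer):
        """Rewrite every numeric literal to twice its value."""
        def visit_Constant(self, node):
            return ast.copy_location(ast.Constant(node.value * 2), node)

    new_tree = ast.fix_missing_locations(Doubler().visit(tree))

    # Compile and run the transformed program: (1*2) + (2*4 ... i.e. 2 + 4).
    expr = ast.fix_missing_locations(ast.Expression(body=new_tree.body[0].value))
    result = eval(compile(expr, "<ast>", "eval"))
    # result == 6
    ```

    The clunkiness of going through a separate `ast` API, rather than writing the transform in the same notation as the program, is precisely the gap the Lisp side of this argument is pointing at.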

  • Re:He's also right (Score:2, Interesting)

    by fnc ( 666371 ) on Tuesday May 26, 2009 @01:32AM (#28091535)
    From what I know, the fastest high-level language is Fortran. Besides being the first high-level language, and obviously having had lots of compiler-optimization work done on it, it has a very restrictive type system that does not have pointers (or has them only in a very restricted form), so the compiler can do optimizations on arrays that would be unsafe in a language like C.
  • Re:That's a myth. (Score:3, Interesting)

    by Zoxed ( 676559 ) on Tuesday May 26, 2009 @03:19AM (#28092045) Homepage

    > The difference is that I can sit down and simply enter near algorithms of matrix math into Fortran, and the optimizer will go to town and give me near perfect code,

    Ahhh: a breath of fresh air. As a programmer, this is exactly how it should work for me. Not "my language is better than your language", but: the programmer gets on with describing the solution and leaves the compiler to do the boring work!
