
World's "Fastest" Small Web Server Released, Based On LISP 502

Cougem writes "John Fremlin has released what he believes to be the world's fastest webserver for small dynamic content, teepeedee2. It is written entirely in LISP, the world's second oldest high-level programming language. He gave a talk at the Tokyo Linux Users Group last year, with benchmarks, which he says demonstrate that 'functional programming languages can beat C.' Imagine a small alternative to Ruby on Rails, supporting the development of any web application, but much faster."
This discussion has been archived. No new comments can be posted.

World's "Fastest" Small Web Server Released, Based On LISP

Comments Filter:
  • In speed and elegance, perhaps. But not on the überprogrammer salary to maintain it.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Dunno about you, buddy, but I find LISP a lot easier to read and write than all the C-like languages (although pure C itself is OK when it sticks to what it's good at - being a set of macros for assembler when programming systems-level stuff).

      • by Holmwood ( 899130 ) on Monday May 25, 2009 @01:11PM (#28085819)

        First, his blog is standing up to a slashdotting. That's impressive.

        Dunno about you, buddy, but I find LISP a lot easier to read and write

        Right, but we're not talking about you. I wish we were. If your skills were more common we'd have a better world.

        Second, I can speak up and I'm not even posting as an ac. It's straightforward to find people who can "program" in a language of their choice. It's tougher to find people who can program well in a language of their choice. It's tougher yet to find people who can program well in a language of your choice. It's very tough to find someone who can code well in C and insanely tough to find someone who can code well in LISP.

        It's been my observation -- as someone who has managed to convince many others that he deserves the salary of an "uberprogrammer" -- as I've shifted into running large engineering teams, that perhaps one in twenty programmers can code acceptably in C and perhaps one in two hundred in LISP.

        Third, I'd note there are behaviours of his software that surprised and annoyed some readers -- e.g. column treatment. I'd argue that these are generally buried deeper in LISP code than in C, but that's something we could heartily debate.

        Finally, his code seems typical of what I've seen from good LISP programmers -- including even at times myself. Poor documentation. The code is simple, elegant, and should "speak for itself". Well it doesn't. Not to someone trying to maintain it.

        C programmers -- perhaps because of the nature of the language -- seem less prone to this particular trap, though still bad.

        Regards,
        -Holmwood

        • First, his blog is standing up to a slashdotting. That's impressive.

          Not unless you're used to desperately overburdened shared hosting. My six-dollar-a-month account from HostMonster has handled multiple simultaneous slashdottings with concomitant reddit and digg traffic several times. One of my customers sustained roughly seven megabits of traffic for several days straight inside a VM with no problems.

          Slashdot traffic taking a site down means the site isn't hosted at a reputable host, these days.

          If your skills were more common we'd have a better world.

          As much as Lisp people want to say that Lisp lost because of the price of Lisp machines and Lisp compilers, it actually lost because it isn't a particularly practical language; that's why it hasn't had a resurgence while all these people move to Haskell, Erlang, Clojure, et cetera.

          Lisp is a beautiful language. So is Smalltalk. Neither one of them was ever ready to compete with practical languages.

          It's very tough to find someone who can code well in C

          Er, no, it isn't. You just have to know where to look, and to not get stuck in the Silicon Valley high-school mindset, where Nerf guns are believed to adequately substitute for health care, and where nobody can name a formal method.

          C programmers are the most numerous professional programmers on Earth today, and we're in the highest unemployment for programmers since the dot-com bust, with a number of well-meaning companies blindly ditching C for whatever the new hotness is (and eventually going right back). Hell, I get C/C++ programmers for things that aren't looking for C work, because they (rightfully) believe they can pick up the other language as they go and do a better job than the natives due to their understanding of actual costs.

          If you can't find someone who writes good C, either there's something wrong with how you're attracting staff, or you're not judging them skillfully, or they have some reason to stay away. I'm putting my chip on #3.

          • Re: (Score:3, Informative)

            I dunno. This is what I saw when I tried to read it:

            "Service Temporarily Unavailable

            The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
            Apache/2.2.4 (Ubuntu) mod_fastcgi/2.4.2 PHP/5.2.3-1ubuntu6.5 Server at john.freml.in Port 80"

        • Re: (Score:3, Interesting)

          by Xiroth ( 917768 )

          Finally, his code seems typical of what I've seen from good LISP programmers -- including even at times myself. Poor documentation. The code is simple, elegant, and should "speak for itself". Well it doesn't. Not to someone trying to maintain it.

          C programmers -- perhaps because of the nature of the language -- seem less prone to this particular trap, though still bad.

          Most likely because it's much easier to verbalise what a small segment of C is doing compared to a small segment of LISP. When writing C, I us

    • No, definitely not in speed.

      He wrote a LISP-based memory-only webserver that could respond to requests roughly 10% faster than lighttpd with php. I promise you, if I wrote a C implementation that performed only the functionality he implemented, it would blow it out of the water. In fact, before anyone else comes out with the "X is faster than C!" claim, I'll leave the challenge out there:

      I will prove that anything written in a higher-level language will not be as fast as my implementation of it in C. I leave this challenge out to anyone to take. (*)

      Seriously, I'm sick of this crap. Bring it on.

      (*) Caveat: It must be a small challenge involving a relatively simple task. I don't have a lot of time to waste on this.

      • by vivaoporto ( 1064484 ) on Monday May 25, 2009 @01:05PM (#28085737)
        Parent post is inflammatory but not troll. He has a point: this implementation is a minimal test case built in order to prove a point. A skilled C programmer could implement the same test case so that it performed better than the LISP one, if the task were worth the effort.
        • He's also right (Score:5, Insightful)

          by Sycraft-fu ( 314770 ) on Monday May 25, 2009 @02:12PM (#28086525)

          Reason being is that C is the closest high level language to how a processor actually operates. A lot of people get confused, or perhaps never really know how a CPU actually works, or that no matter what language you code in, it all gets translated into machine language in the end.

          Now what that means is that there are certain things that, while convenient for humans, have nothing to do with how the processor actually thinks. A good example would be newer replacements for pointers. A lot of people hate pointers; they claim, correctly, that pointers are confusing and that you can easily cause problems with them. That is all true, but it is also how the CPU actually thinks. The CPU doesn't have advanced concepts of references and garbage collection and so on. The CPU has data in memory, and pointers to the location of that data. It doesn't even have data types. The data is just binary data. You can store a string and then run calculations on it using the FPU. Granted, you'll get garbage as a result, but there's nothing stopping you from doing it; the CPU has no idea what your data is.
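          To make that concrete, here is a tiny (and deliberately pointless) C sketch of my own that stores text and then does floating-point arithmetic on the very same bytes -- the hardware doesn't object, it just computes:

              #include <stdio.h>
              #include <string.h>

              int main(void) {
                  char text[8] = "GARBAGE";   /* seven characters plus the trailing NUL: eight bytes */
                  double d;

                  memcpy(&d, text, sizeof d); /* reinterpret those same eight bytes as a double */
                  printf("%g\n", d * 2.0);    /* the FPU happily does math on what used to be a string */
                  return 0;
              }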

          So, the upshot of this is that C is very close to the bare metal of how the system works. Thus if you are good with it, you can produce extremely efficient code. The higher level stuff may be nice, but it all slows things down. When you have a managed language that takes care of all the nasty stuff for you, well, it is spending CPU cycles doing that. Now the tradeoff is quite often worth it, since CPUs have loads of power and maintainability of code is important, but don't trick yourself into thinking it is more efficient.

          You have to remember that no matter what, there is one and only one way that the machine actually thinks, one way it actually processes information. All the nifty high level programming shit is just to make life easier for the programmers. That's wonderful, but it doesn't give you the most optimized code. Of the high level languages, C retains that crown, and likely always will, because it is the closest to how a CPU actually works. I've seen the joke that "C is a language with all the speed of assembly and all the ease of use of assembly!" There's some truth to that.

          So I have to agree with the grandparent. If the LISP heads think LISP is faster than C, they are kidding themselves. I'm not saying a good LISP program can't be faster than a bad C program, but if you have equal skill in optimization, sorry C will win out because in the end it will generate more efficient machine code and that's all that matters. All the theory of different programming paradigms in the world isn't relevant to how the CPU is actually going to do things.

          • "... C is the closest high level language to how a processor actually operates."

            C was meant to be equivalent to a high-level assembly language. Someone told me that he was worried about coding in assembly language for a particular part of a video driver, but he discovered that, in that case, his C compiler wrote perfect assembly language.
            • Re: (Score:3, Insightful)

              by Jurily ( 900488 )

              C was meant to be equivalent to a high-level assembly language.

              No, it was a general-purpose high-level language. It's just that our definition of "high-level" has changed over the past 40 years. Now we have Python and Ruby.

          • Re:He's also right (Score:4, Informative)

            by Anonymous Coward on Monday May 25, 2009 @03:16PM (#28087169)

            C is not how a modern processor thinks, with super-scalar instruction issue, cache and pre-fetch memory controls, and speculative branch prediction. In the end, even the C community splits into camps that let the optimizer do everything, versus embed some hand-written assembly or equivalent "machine intrinsics" routines in the middle of their normal C code. In both cases, non-trivial amounts of profiling and reverse-engineering are often needed to coax an appropriate machine code stream into existence, and this machine code is decidedly not how the developers usually think.

            The choice of language is not so significant really. You can find Lisp dialects that efficiently use native machine types and have little runtime cost due to having weak type systems (just like C) where casting is easy and the responsibility for crazy results lives with the programmer and the limited ability of the compiler to check some static cases. These dialects will run imperative code quite well, e.g. if you transliterated typical C procedures into equivalent Lisp procedures, you'd get similar performance. Ironically, these systems aren't as fast when you write very high-level or functional Lisp, because those sorts of programs rely on a more elaborate optimization and runtime support layer, e.g. to optimize away recursive function call overheads or frequent allocation and destruction of temporary data like lists. This kind of code also doesn't work well in C, so the programmer has to perform these optimizations at the source level, by writing loops instead of recursion and making use of stack variables and static data structures instead of making many malloc/free calls in inner-loops, etc.

            The main difference is the presumed runtime system for the language, the compilation goals, and the core language libraries. This includes things like whether you have garbage collection or explicit memory management, how you compile (whole program versus treating every function/procedure as an ABI symbol), high-level IO abstractions or low-level register (or memory-mapped) IO and interrupt events, etc.

            If you're interested in this stuff, you might learn something from reading about PreScheme, which was a lisp dialect designed to allow the runtime system for a full Scheme (lisp dialect) to be written in a more limited Scheme-like language. This is much like the core bits of an OS kernel like Linux are written in carefully controlled subsets of C that do not presume the existence of an underlying runtime environment nor the standard C library.

            In reality, many of the compiler and runtime techniques applied to a simple language like lisp could be applied to a C implementation as well. It's really a cultural rather than technical issue which prevents there being C environments that skip the traditional, naive compile and link strategy used in POSIX and UNIX ABIs.

          • That's a myth. (Score:4, Interesting)

            by Estanislao Martínez ( 203477 ) on Monday May 25, 2009 @03:17PM (#28087189) Homepage

            Reason being is that C is the closest high level language to how a processor actually operates.

            Once you get things like branch prediction, speculative execution and pipelining into the picture, no, C isn't really any closer to how the processors operate. Making efficient use of a modern CPU involves detail at a much, much lower level than C exposes.

            The performance killer for high-level languages isn't really the abstraction away from the machine instruction set; it's garbage collection. And even then, it's mostly because GC tends not to play well with memory caches and virtual memory; a simple stop-and-copy garbage collector is actually algorithmically more efficient than malloc/free, but absolutely atrocious with caches and VM.

            • Re:That's a myth. (Score:5, Insightful)

              by TeknoHog ( 164938 ) on Monday May 25, 2009 @03:50PM (#28087487) Homepage Journal

              Once you get things like branch prediction, speculative execution and pipelining into the picture, no, C isn't really any closer to how the processors operate. Making efficient use of a modern CPU involves detail at a much, much lower level than C exposes.

              The problem at that level is that you'll be seriously bound to a specific architecture. Even C, which is often called a portable assembler, is designed after a certain kind of assembler.

              The somewhat surprising result is that you can also improve performance (compared to plain C) with a higher level language. You need a higher-level perspective to tackle things like vector/parallel processing efficiently.

              • Re:That's a myth. (Score:5, Insightful)

                by hawk ( 1151 ) <hawk@eyry.org> on Monday May 25, 2009 @06:38PM (#28088953) Journal

                It's about using the right tool for the job.

                Some years, I do heavy computational programming. Not so much number crunching, as bashing them into submission.

                I can do this *much* faster in a modern Fortran (90 or later) than in C. Not because C can't do the same things, and not because optimized C can't get to the same results.

                The difference is that I can sit down and simply enter near algorithms of matrix math into Fortran, and the optimizer will go to town and give me near perfect code, which will be faster than what I would get with C (which would also take significantly longer to code). A skilled C coder could do a better job, and ultimately get roughly the same performance as I got from Fortran.

                This isn't because "Fortran is better than C," but because Fortran is designed and optimized by people who do this kind of stuff *to* do this kind of stuff. It can make very strong assumptions that C can't (and much of the hand-optimizing of C is essentially replicating these assumptions).

                OTOH, writing a modern operating system in Fortran *could* be done, but it would be painful, would take far longer to code, and would probably have atrocious performance.

                note: I believe that Pr1mos was largely written in FORTRAN IV.

                hawk

                • Re: (Score:3, Interesting)

                  by Zoxed ( 676559 )

                  > The difference is that I can sit down and simply enter near algorithms of matrix math into Fortran, and the optimizer will go to town and give me near perfect code,

                  Ahhh: a breath of fresh air. As a programmer, this is exactly how it should work for me. Not "my language is better than your language", but the programmer gets on with describing the solution and leaves the compiler to do the boring work!

          • Re:He's also right (Score:5, Informative)

            by The_Wilschon ( 782534 ) on Monday May 25, 2009 @03:25PM (#28087249) Homepage
            You forget about compilers. LISP gets compiled (by most implementations), too. All the "nifty high level programming shit" can, and sometimes does, if you have a good enough compiler, get compiled away. Furthermore, the "nifty high level programming shit" provides a whole lot more information to the compiler, allowing it to do much more aggressive optimizations because it can prove that they are safe. If somebody comes up with a slick new optimization technique, I don't have to rewrite my LISP code, I just implement it in the compiler. You'd have to go back through every line of C code you've ever written in order to implement it. If somebody gives you a radically different CPU architecture, the C code that is so wonderfully optimized for one CPU will run dog slow. You can reoptimize it for the new arch, but then it will run slow on the old one. With a good LISP compiler, the same code gets optimizations that are appropriate for each arch.

            Check out Stalin, SBCL, and http://www.cs.indiana.edu/~jsobel/c455-c511.updated.txt [indiana.edu]. You might be surprised at what you find.
          • Re: (Score:3, Informative)

            by Skinkie ( 815924 )
            And that is the point the LISP guy wants to make: iff I have a LISP compiler that in general optimises better than the code structure a C programmer would write, it will be faster, because you can heavily optimise the LISP compiler. In the same ballpark are Haskell (and maybe, in the future, Python): iff their compilers generate better structures because the task is better formally defined, they could generate the optimal structure for the problem, maybe more optimal than a human would design it. Today: no.
          • Re:He's also right (Score:5, Insightful)

            by Sectrish ( 949413 ) on Monday May 25, 2009 @03:37PM (#28087365) Homepage
            I fully agree with your post (I prefer C over most other languages myself for some weird reason, but if it needs to be made *right now*, I'll use Python/Perl/Bash/...).

            However, there is an addendum I'd like to make here:

            Some programming languages force you to think in ways that are actually beneficial to the speed of your code, and can outpace the "normal" C program significantly.

            For example, a few months ago I was forced to write something concerning an AI algorithm in Prolog. Now, I was cursing and cursing at it, because the constraint solver built into the prolog compiler was so goddamn restrictive, but that's how constraint solving works. Every time I was thinking to myself: "if I'd have been allowed to build this in C, it would be done yesterday, and probably be faster to boot!"

            But when I ended up finishing it and running it, it was blindingly fast, and I queried my professor about it. He told me that another student some time ago had thought the same thing as me, and actually made a C variant. It ran 4x slower than the Prolog equivalent, even after he spent quite some time optimizing (interchanging algorithms and even looking at the assembler code, he told me).

            Then he told me what was causing this discrepancy, as I had always thought that C was the end-all, be-all of performance. It was the restrictive nature of the Prolog solver that forced me to put more brain power into the problem, and thus shift work from the computer to the human, because those same restrictions allowed lots and lots of optimisations (aliasing comes to mind).
          • Re:He's also right (Score:5, Interesting)

            by amn108 ( 1231606 ) on Monday May 25, 2009 @04:25PM (#28087825)

            I'll start with the good things. First of all, I like your style of writing - clear, precise and on point (of your choosing). Second, you lay out the scenery here quite well.

            Now, to the bad things. I can almost bet you are either not a day-to-day programmer (as opposed to someone who casually writes simple bits of code in C, perhaps), or you just do not know much computing history or the latest developments in compilers and technology in general. Maybe you write niche software and are not interested in these developments, I do not know, but I think it is a bit odd that you give such a good and knowledgeable read, yet completely (in my humble opinion) miss the facts overall.

            Machines are different too. There is RISC, there is ZISC, there is VLIW, and there is the CISC/RISC hybrid that modern CPUs mostly are. These days we are also starting to think about how we can utilize vector processors, which gamers know well as their video cards. Everyone has one, whether they know it or not; nowadays they install a 500 MFLOPS graphics card in PCs used by hotel receptionists.

            So, C was designed to go close to the metal yes, but since metal is different, C may shoot or miss depending on the architecture too.

            What is far more important, given that today we still use mostly the same instruction set we used when C was invented, is the fact that you are absolutely mistaken if you think high-level languages will not approach C. You overestimate hand-optimization and underestimate modern compilers. It is illogical to assume that a person IN FRONT of the computer terminal will know, and benefit from knowing, how a program of his writing may be optimized. It is the computer itself that, given a sufficiently well-developed compiler, has the potential to optimize code. The mere fact that in practice it is not always so is because the field is immature; but not to worry, rapid developments are being made.

            Also, things like static typing, static code analysis and other logical solutions absolutely negate any benefit C may have. And I am surprised you compare garbage collection to C, given how programs developed in C still occasionally need, depending on their domain, to allocate objects on the heap, and how most virtual machines allocate values on the stack under the hood, even those with a garbage collector.

            Anyway, to cut it short here, and perhaps give you a chance to explain and ridicule me :-), I will just say I find your comparison of C to, say, LISP grossly oversimplified, and it does not convince me. It is in fact programming paradigms that have liberated compiler writers to write increasingly effective compilers. Spend some time reading about the theory of computation on Wikipedia, for instance; it has given me a whole new view on the state of the art. The bottom line is, teaching computers how to translate human-typed grammar more efficiently into their program execution machinery is getting much cheaper and much more fruitful than spending time or energy hand-writing C code. And I am not talking about the "compromise of man hours"; I am saying that with both LISP and C programs being equally 'good', they can be equally fast, especially depending on the LISP compiler.

            Thank you for your attention, I know how precious it is here on Internetlands.

          • Re:He's also right (Score:4, Interesting)

            by shutdown -p now ( 807394 ) on Monday May 25, 2009 @05:51PM (#28088569) Journal

            So I have to agree with the grandparent. If the LISP heads think LISP is faster than C, they are kidding themselves. I'm not saying a good LISP program can't be faster than a bad C program, but if you have equal skill in optimization, sorry C will win out because in the end it will generate more efficient machine code and that's all that matters. All the theory of different programming paradigms in the world isn't relevant to how the CPU is actually going to do things.

            It's true that any given piece of code can be written to perform faster (or at least not any slower) in C than in Lisp, Java, C#, or whatever your favorite high-level programming language is.
            However, this doesn't mean that applications written in idiomatic, well-written C are necessarily faster than some-other-language. Why? Well, here's a very simple example.

            Let's say you need to sort some stuff in an array. In C, you can hand-code a quicksort or a merge sort inline that will blow anything else out of the water... if you're only doing it once. But you're probably not. So you refactor it to a function to reduce code duplication. Good, but now you need to sort different types, and with different comparison logic - so you add a function pointer for a comparison function. In other words, you get the stock qsort:

            void qsort(void* array, size_t elements_count, size_t element_size, int (*comparer)(const void*, const void*));

            And at that point you're already slower than C++'s std::sort, because:

            1) qsort takes a function pointer and does indirect calls through it, while std::sort will take a function object and inline all calls (in theory a C compiler can inline calls via function pointer as well, but I've yet to see one that does).

            2) The qsort comparer argument always takes its arguments by reference, even when it's some type that's more efficiently passed by value and in a register (e.g. int). std::sort function object doesn't have this limitation - it can take arguments either by value or by reference.

            The real problem here is the lack of genericity. If you hand-code the sort for every specific case (type + comparison function), you'll be better off in C, but then you'll get tons of duplicate code, which is bad for maintainability. And C doesn't offer any decent ways to generify code at compile time (only macros, but they are so limited and generally meh), so most people just use the more-generic-but-slower solution and don't bother -- and end up slower than C++.

            It should also be noted that the above C vs C++ comparison isn't limited to C++. For example, a direct std::sort analog can be written in C#, fully generic, and all arguments will apply to it as well (JIT will inline the comparer call, and so on).
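            To put the genericity point above in pure C terms, here is a minimal sketch of my own (illustrative only; an insertion sort stands in for a real sort to keep it short): the generic path pays for an indirect call per comparison, while the type-specific path lets the compiler inline everything, at the cost of rewriting the sort for every element type.

                #include <stdlib.h>

                /* Generic path: every comparison is an indirect call, and both
                   elements are passed by pointer even though they are plain ints. */
                static int compare_int(const void *a, const void *b) {
                    int x = *(const int *)a, y = *(const int *)b;
                    return (x > y) - (x < y);
                }

                void sort_generic(int *a, size_t n) {
                    qsort(a, n, sizeof *a, compare_int);
                }

                /* Specialized path: the comparison is ordinary integer code the
                   compiler can inline and optimize, but it only works for int. */
                void sort_int(int *a, size_t n) {
                    for (size_t i = 1; i < n; i++) {
                        int key = a[i];
                        size_t j = i;
                        while (j > 0 && a[j - 1] > key) {
                            a[j] = a[j - 1];
                            j--;
                        }
                        a[j] = key;
                    }
                }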

      • Modded Troll? Come on guys, it's a legitimate challenge - I'd really love to have someone take me up on it. If anyone out there thinks they can honestly write faster code in some higher level language than I can in C, I want to put it to the test. It'll be fun, and I'll happily admit defeat if it can be thus proven. (And I'll take up the competing programming language.)

        • Why not go ahead and do a demonstration?
        • Re: (Score:3, Insightful)

          by imbaczek ( 690596 )
          you can't be faster than C, because only C has access to 100% of the special functionality OS kernels provide, like sendfile(). this challenge is moot and everybody knows it; the point is to _not_ be writing in C and still achieve speeds which are respectable and/or fast enough.
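          For reference, a minimal sketch of how sendfile() is typically used from C on Linux (my own illustration: the function name send_whole_file is made up, error handling is trimmed, and both descriptors are assumed to be already open):

              #include <sys/sendfile.h>
              #include <sys/stat.h>
              #include <sys/types.h>

              /* Push an already-open file straight to a connected socket without
                 copying the data through user space. */
              int send_whole_file(int sock_fd, int file_fd) {
                  struct stat st;
                  if (fstat(file_fd, &st) != 0)
                      return -1;

                  off_t offset = 0;
                  while (offset < st.st_size) {
                      ssize_t sent = sendfile(sock_fd, file_fd, &offset,
                                              (size_t)(st.st_size - offset));
                      if (sent <= 0)
                          return -1;   /* real code would check errno for EINTR/EAGAIN */
                  }
                  return 0;
              }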
          • by Gorobei ( 127755 ) on Monday May 25, 2009 @02:02PM (#28086427)

            Actually, you can be faster than C in many cases. C must generate suboptimal code in certain cases because it cannot protect against edge cases like pointer aliasing.

            I've seen a LISP compiler generate better loop code in some cases, simply because it can prove arrays are non-overlapping, or that X*X is provably positive.
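            A small C illustration of the aliasing point (function names are mine): without any guarantee, the compiler has to assume dst and src might overlap; C99's restrict hands it the same non-overlap guarantee a Lisp compiler might establish by proof, so it can reorder and vectorize the loop freely.

                /* The compiler must assume dst and src may alias, which limits
                   how it can reorder or vectorize the loads and stores. */
                void scale_may_alias(double *dst, const double *src, double k, int n) {
                    for (int i = 0; i < n; i++)
                        dst[i] = k * src[i];
                }

                /* With C99 restrict, the programmer promises the arrays do not
                   overlap -- exactly the information the optimizer was missing. */
                void scale_no_alias(double * restrict dst, const double * restrict src,
                                    double k, int n) {
                    for (int i = 0; i < n; i++)
                        dst[i] = k * src[i];
                }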

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        I think you are right that C can take on any higher level programming language in speed. The same could be said of assembly.

        The reason of course we don't write everything in low level programming languages is just the cost of maintaining them (LOC, readability, compatibility...). I like what John has done in showing what assumptions should be made in a higher level programming language in order not to compromise a whole lot of speed.

      • by mauddib~ ( 126018 ) on Monday May 25, 2009 @01:12PM (#28085833) Homepage

        Yes, and your caveat is actually the most important element: for projects that need well-definable high-level abstractions, or need to operate on mathematically infinite structures, a functional language clearly wins in comparison with C.

        The real question is: how to allow high-level lambda abstractions while keeping space and time complexity as low as an optimized C program's.

        Well, just to show you that your challenge is easily met... In Lisp, it is easy to write an assembler which over time allows the same kind of imperative abstractions as are present in C, thus allowing me to write a program with the same speed as in C.

        Also, when the nature of the input of a high-level programming language changes, it could optimize its data structures and algorithms to create a better match with that input. Of course, such a thing could also be implemented in C or Pascal, but requires tremendously more effort.

        • Nowhere in my statement did I say that C is therefore the correct language in which to write everything. As projects grow in size and complexity, maintainability is more important than raw speed.

          However, in context to the article, the example was "here is something that everyone does in C, but I did it in LISP and it was faster!" And that is my challenge: one, relatively small app written in any higher language will be faster when written in C. There are no restrictions to the challenge based on maintain

          • by beelsebob ( 529313 ) on Monday May 25, 2009 @01:53PM (#28086329)

            The debian language shootout has a few examples of Functional languages being faster than people's best efforts in C, especially when it comes to parallelisation. I suggest you try and write a regex-dna example that's faster than the Haskell implementation for example.

            Having said that, the point really isn't that it's faster, it's that it's *as fast* - people should be shedding the ideas that functional languages are slow, and unusable, and trying them out in industry, because now the only real risk is that you have dumb coders (entirely possible).

            • Re: (Score:3, Informative)

              The debian language shootout has a few examples of Functional languages being faster than people's best efforts in C, especially when it comes to parallelisation. I suggest you try and write a regex-dna example that's faster than the Haskell implementation for example.

              First of all, you should be careful about using results from the Language Shootout in general, because they often don't know what they're measuring. For example, for quite a while, Haskell scored much higher on the benchmark because the tests were written in such a way that results were computed but then never used; and the Haskell compiler is surprisingly good at figuring that out, so it discarded the whole computation part as well. In other words, Haskell tests didn't actually do the same work that C tests di

        • In Lisp, it is easy to write an assembler which over time allows the same kind of imperative abstractions as are present in C, thus allowing me to write a program with the same speed as in C.

          You can do the same in C, so the only conclusion from your argument is that neither is faster. Which makes the whole challenge useless if we go down that road.

          Of course, such a thing could also be implemented in C or Pascal, but requires tremendously more effort.

          Again this doesn't say anything about the intrinsic performan

      • This is not a troll, and should not be modded as such. The poster has an accessible (spam-protected) email address that I assume is valid. S/He's been posting a long time on slashdot, and his/her assertion is credible and interesting, albeit provocative.

        Moreover, I'm inclined to agree, with my own caveat: while one can write C that is more rapid than LISP, one can also more rapidly write LISP than C (for a given problem).

        I also note my earlier statement, that it's easier (though still quite hard) to find a t

      • Wonderful. And after you write it, do not forget to send it to Jeffrey Mark Siskind. He will send you a faster implementation written in Stalin. :o)
      • Probably true, but not necessarily interesting. You can take a high-level language, compile it to LLVM IR, and then use the C backend for LLVM, and get a C program which will be exactly as fast as the high-level language program. This effectively proves that the C program can always be at least as fast as the original program. The difference comes when you try to maintain it. Programs, over time, gradually accumulate changes, often far outside the scope of the original design. In a (good) high-level la

      • by MrMr ( 219533 ) on Monday May 25, 2009 @02:04PM (#28086443)
        Funny, I always say the same about Fortran. Here's a toy test program for stuff I often need. I would be impressed if C beats this.

              program co
              implicit none
              double precision mpi
              parameter (mpi=3.141592653589793238462d0/1.024d3)
              double complex r(10240)
              integer i,j
              do j=10,110
                 do i=-5119,5120
                    r(i+5120)=sqrt(mpi*i*j)
                 end do
                 write(j) r
              end do
              end
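        For comparison, here is a rough C99 translation of that toy (my own sketch: I'm assuming a complex square root is the intent, since r is declared double complex, and I'm approximating the unformatted write to unit j with one binary file per j; Fortran record markers and exact layout are glossed over):

              #include <complex.h>
              #include <stdio.h>

              int main(void) {
                  static double complex r[10240];
                  const double mpi = 3.141592653589793238462 / 1024.0;

                  for (int j = 10; j <= 110; j++) {
                      for (int i = -5119; i <= 5120; i++)
                          r[i + 5119] = csqrt(mpi * i * j);   /* 0-based instead of Fortran's 1-based indexing */

                      char name[32];
                      snprintf(name, sizeof name, "fort.%d", j);  /* mimic Fortran's per-unit files */
                      FILE *f = fopen(name, "wb");
                      if (f) {
                          fwrite(r, sizeof r[0], 10240, f);
                          fclose(f);
                      }
                  }
                  return 0;
              }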
        • Re: (Score:3, Insightful)

          by Kensai7 ( 1005287 )

          Every language has its own niche and ideal use.

          I like your simple example that shows the merits of the oldest high-level language in what it was designed to do best.

      • Re: (Score:3, Informative)

        by debatem1 ( 1087307 )
        1) The question is about functional programming languages versus imperative programming languages - not high-level versus low-level.

        2) Can we agree on a platform? If I get to name it, it's going to be the Xerox 1109, and you're toast.

        3) The computer language shootout [debian.org] has some numbers that don't look so good for C. Maybe you'd care to re-implement the thread-ring test? 'Cause right now it's taking C 164+ seconds to do it, and 9 on Haskell. Same thing on the k-nucleotide test.
      • Re: (Score:3, Insightful)

        Is there a point to this challenge? I will prove that anything written in C will not be as fast as my implementation of it in assembler.
      • Re: (Score:3, Informative)

        by laddiebuck ( 868690 )

        What's more, without a cache! That is, for every request, the PHP webpage is being recompiled. I hope he doesn't call that a fair comparison, as anyone even remotely interested in high throughput takes ten minutes to install a caching system like xcache or one of five other alternatives. I bet you anything that lighty, fastcgi and xcache would serve 1.5-2 times as many requests per second as his homebrew code.

      • by ratboy666 ( 104074 ) <fred_weigelNO@SPAMhotmail.com> on Monday May 25, 2009 @04:24PM (#28087817) Journal


        Trolling sure sounds easy, but...

        Gambit-C Scheme vs. C

        I'll make it easy for you. It's the two minute litmus test. Even easier -- I'll give you the pseudo-C code:
        Task: compute n! for n >= 1000.

        In Scheme (Gambit 4.2.8, using infix):

        int factorial(int n) {
          if (n <= 0) {
            1;
          } else {
            n * factorial(n - 1);
          }
        }

        compile with: gsc f.six
        and run it:

        gsi
        Gambit v4.2.8

        > (load "f")
        "/home/user/f.o1"
        >(factorial 1000)
        4023...0000

        Your challenge? Write a C version in two minutes, tested and compiled. Now, as the final icing, run the C version on smaller numbers, and compare the performance -- did you forget to compile in small integer versions? (try factorial(12) a million times).

        I'll wait (another two minutes). Compare the performance against the LISP version. Did you have to write two versions -- one for big integers and one for small integers? That is pretty well the only way to keep a speed advantage... I hope you wrote it that way. Did you remember to put in 32/64 bit conditionals to retain your advantage on a 64 bit platform?

        I think your C code now looks like this (it should):

        #define FACT_LIMIT 12   /* for a 32-bit int type; I don't know what the cutoff is for 64 bit */
        #include <bignum.h>     /* stand-in for whichever bignum package you choose */

        /* This only gets executed a maximum of FACT_LIMIT times; leave it recursive */
        int fact_integer(int n) {
          if (n <= 0) {
            return 1;
          } else {
            return n * fact_integer(n - 1);
          }
        }

        /* May wish to rewrite to an iterative form */
        bignum factorial(bignum n) {
          if (compare_lt(n, FACT_LIMIT)) {
            return int_to_bignum(fact_integer(bignum_to_int(n)));
          }
          return bignum_mult(n, factorial(bignum_dec(n)));
        }

        You choose the bignum package to use. Or, for more fun, write it yourself. If you wrote it yourself, you remembered to switch to FFT style multiplication at bigger sizes? Or Karatsuba?

        Now, we have only coded to a recursive form, but, since bigints are not first-class in C, we don't know about memory reclamation (leakage). I hope you know the gmp library, or can roll up a gee-whiz allocator on your own. The gmp library would be cheating, by the way -- YOU DID CLAIM YOUR IMPLEMENTATION IN C.

        If recursion is viewed as a problem, the Gambit-C version can be recoded as:

        int factorial(int n) {
          int i;
          int a;
          if (n <= 0) {
            1;
          } else {
            a = 1;
            for (i = 1; i <= n; ++i) {
              a *= i;
            }
            a;
          }
        }

        I am sure that something equivalent can be done in the C version. But the normal flow of control stuff doesn't know about bignums. We COULD make the incoming parameter an int, I guess... which works for factorial() but may not be as workable for other functions.

        Answers:
        - gmp does better than Gambit-C on bigint multiply, using FFTs.
        - breaking the result into two separate functions is needed for C to come ahead.
        - yes, C is faster, at the expense of a lot more programming.
        - if I want to, I can simply drop C code into my Gambit-C program on an as-needed basis. The Gambit-C code still looks a whole lot cleaner than the C version, and ties it for small integer performance. The bigint performance is still a "win" for gmp, but I can use THAT package directly as well in Gambit-C.

        Win:
        - Gambit-C. The prototype was finished to spec in two minutes. Optim

        • And now for the grand unveil - the SCHEME (LISP) version, with integrated C code for the "super-speedy" inner loop. Just to illustrate that the SAME optimizations are available for the LISP programmer. This can be further micro-optimized.

          Note that it isn't much different from a "C" implementation that has the same optimization. Note the ease of moving between data types (we only worry about them in a very cursory way). I am waiting for your pure C implementation.

          # In "normal" scheme syntax. Note escape to six sy

          • Horse to water... (Score:3, Insightful)

            by ratboy666 ( 104074 )

            We now come to the final post in this series.

            The speed in C comes from the direct hardware "low-level" thing. In the domain of "int" types on a 32 bit machine, I would say that the most efficient (well, one of the most efficient) implementations of factorial() would be:

            #define factorial(n) ((n) <= 0 ? 1 : fact_table[n])
            int fact_table[] = { 1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, 39916800, 479001600 };

            Can you hear the l33t-speak? I codez the C r3al g00d! My factorial function only takes a nanosecond or

    • oblig (Score:5, Funny)

      by olivier69 ( 1176459 ) on Monday May 25, 2009 @01:00PM (#28085689) Homepage

      In speed and elegance, perhaps.

      So you agree to the fact that emacs is faster and more elegant than vi, right? You agree?

  • Disgusting (Score:5, Funny)

    by Anonymous Coward on Monday May 25, 2009 @12:22PM (#28085259)

    Imagine a small alternative to Ruby on Rails, supporting the development of any web application, but much faster.

    It's disgusting that these LISPers aren't content with their own perversion, but have to try to attract others to the gay lifestyle.

  • An alternative (Score:3, Interesting)

    by Ed Avis ( 5917 ) <ed@membled.com> on Monday May 25, 2009 @12:26PM (#28085305) Homepage

    You might also be interested in SMLserver [smlserver.org] which embeds Standard ML into Apache, and apparently is pretty fast.

  • by Anonymous Coward on Monday May 25, 2009 @12:37PM (#28085431)

    And Java is faster too!

    (rolls eyes)

    Different tools are good for solving various problems.

    Yeah, I know certain library routines in certain languages are better than others.

    Interpreted languages, in general, are not faster than compiled languages. Period.

    This "faster than C" canard keeps getting trotted out and shot down every time.

    Well, there is one language faster: assembly.

    • by MADnificent ( 982991 ) on Monday May 25, 2009 @12:51PM (#28085593)

      As you hinted at Common Lisp being an interpreted language, a clarification is in order.

      Most Common Lisp implementations are compiled, as they have been for some time now. Some Lisp implementations compile the code themselves (SBCL, for instance); others compile by way of C code (ECL, for instance).

      An overview of free CL implementations can be found at http://www.cliki.net/Common%20Lisp%20implementation [cliki.net] .

    • by PaulBu ( 473180 ) on Monday May 25, 2009 @12:51PM (#28085595) Homepage

      It can be, but any decent production implementation is compiled to native machine code -- it just includes a compiler (and usually a pretty fancy optimizing one!) built into the image and always available.

      Try running, say, SBCL one day before spreading misunderstandings...

      Paul B.

      • Try running, say, SBCL [...] Paul B.

        Hi Paul. Is the B short for "Braham"? ;-)

      • Re: (Score:3, Insightful)

        [Lisp] can be [interpreted]

        Actually, Common Lisp cannot be implemented as a pure interpreter -- there are a few features of the language that you cannot implement without performing a pass over each function.

        (Scheme, the other dominant dialect of Lisp, can be implemented as a pure interpreter, a pure compiler, or a hybrid design.)

    • Re: (Score:2, Interesting)

      by bill_kress ( 99356 )

      Actually, there are things you can do with Java/C# that you can't do with C period.

      C (and even assembly) can't realize that the same inputs to a routine always cause the same output, and therefore cache the return value and just return it (not without a lot of buggy, custom code anyway). (I'm not saying Java/C# DO this, they may--I understand they were working on it... But just trying to give you an idea of what CAN be done)

      Java/C# do compile to assembly by the way--only the parts that are run often. And

      • C (and even assembly) can't realize that the same inputs to a routine always cause the same output, and therefore cache the return value and just return it (not without a lot of buggy, custom code anyway). (I'm not saying Java/C# DO this, they may--I understand they were working on it... But just trying to give you an idea of what CAN be done)

        You could also easily implement that in C. Of course it would only apply to purely functional C functions, i.e. ones whose behavior only depends on their arguments, an
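        A rough sketch of what that hand-rolled caching can look like for a pure function of one small non-negative integer (my own illustration; expensive() is a placeholder, and a real memoizer would have to cope with arbitrary argument ranges, eviction, and threads):

            #define CACHE_SIZE 256

            /* Placeholder for some costly function whose result depends only on n. */
            static int expensive(int n) {
                int acc = 0;
                for (int i = 0; i < 1000000; i++)
                    acc += (n + i) % 7;
                return acc;
            }

            static int cache_valid[CACHE_SIZE];
            static int cache_value[CACHE_SIZE];

            int expensive_cached(int n) {
                if (n >= 0 && n < CACHE_SIZE) {
                    if (!cache_valid[n]) {
                        cache_value[n] = expensive(n);
                        cache_valid[n] = 1;
                    }
                    return cache_value[n];
                }
                return expensive(n);   /* outside the cached range, just recompute */
            }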

      • Actually, there are things you can do with Java/C# that you can't do with C period.

        Really?

        C (and even assembly) can't realize that the same inputs to a routine always cause the same output

        I think the volatile keyword means an external agent can modify the contents of a variable, and if you don't specify volatile, no such agent can. So for all the non-volatile stuff, the compiler can perform the same static analysis that a Java/C# compiler does to decide whether a function has your specified behavior or not.

        And if you're not happy with it, in GCC you can annotate your functions. Wrap the annotation in a portable macro (that does, say, nothing on non-gcc compilers) and Bob's your uncl

      • Re: (Score:3, Insightful)

        by TheRaven64 ( 641858 )

        C (and even assembly) can't realize that the same inputs to a routine always cause the same output, and therefore cache the return value and just return it (not without a lot of buggy, custom code anyway).

        C can't because C is a specification. Now let's talk about implementations. If we're talking two of the popular open source ones, GCC and LLVM, you have __attribute__((const)) and __attribute__((pure)), which mark a function as not reading and not modifying global memory, respectively. A function marked with the const attribute will always return the same value given the same inputs.
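        For example, a minimal sketch (GCC/Clang syntax; the function itself is just an illustration):

            /* Promise to the compiler that the result depends only on the arguments
               and that no global memory is read or written; identical calls in a
               loop may then be merged or hoisted. */
            __attribute__((const))
            static int square(int x) {
                return x * x;
            }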

        Of course, these are hacks, and are only required when the compiler can't see the function body and infer these propert

    • by pz ( 113803 )

      Interpreted languages, in general, are not faster than compiled languages. Period.

      This "faster than C" canard keeps getting trotted out and shot down every time.

      And someday we'll all understand that interpreted languages stopped being interpreted a long time ago and are now compiled before execution, so the only difference between a classic compiled language (like, say, C) and an 'interpreted' language (like, say, Lisp) is that there's a read-eval-print loop (REPL) interface to the latter. Every time you type an expression into a REPL, the compiler gets fired up to process that input into the target language, which then gets sent to an execution engine.

  • by omar.sahal ( 687649 ) on Monday May 25, 2009 @12:43PM (#28085497) Homepage Journal

    LISP, the world's second oldest high-level programming language.

    Sorry, it's the third oldest; this [wikipedia.org] is the oldest.
    It was designed by Konrad Zuse [wikipedia.org], who also invented the first program-controlled Turing-complete computer. Fortran is the second oldest programming language.

  • that there are really only two very clear styles of languages, the C family and the Lisp family, with later languages like Java starting from a C basis but building towards functionality found in Lisp.

    I wonder what is beyond Lisp and other functional languages, though?

    • by monk ( 1958 )

      well the same thing as always. forth ;)

  • by ZosX ( 517789 ) <zosxavius@gTIGERmail.com minus cat> on Monday May 25, 2009 @12:50PM (#28085569) Homepage

    "Automated garbage collection is rubbish" and "Real men don't use floating point" were two of the most interesting and compelling arguments I've read all week. And using LISP as a web platform framework? Fascinating. There were some great ideas back in the days when computers were in there infancy and a lot of them have been abandoned for the most part. Like trinary computing for instance. The building blocks of the computers that you see today were partially designed because of technological limitations at the time. The mechanical underpinnings of the first computers are still present today. I don't care if I'm really wrong about this point (I've been wrong before), but I think computing really needs to transcend a system based on 0s and 1s. Why not abandon the general purpose cpu altogether or at least reduce it to a single core and focus on multiple cores of different types of chips that are optimized for different types of problems? Larrabee might be an hint of that, though I think that ultimately it will really just be a cost saving measure since the gpu no longer needs to be integrated into the board. I think we may be locked into a future of x86 clones for another 30 years at this rate. The Itanic was a good lesson for intel in how much the market values their older code still being able to run without issue. Forgive my bit of a ramble. Just had a whole bunch of random thoughts there.

  • Based on my theoretical understanding of how computers work, I thought HTTP daemon performance depended mostly on

    • I/O performance, much of which is controlled by the kernel (in particular the file system).
    • A good caching strategy (to minimize I/O), again done by the kernel.
    • Good networking performance, controlled by the networking stack in the kernel.
    • Database performance, controlled by the RMS-DB, BDSM(R) or whatever it is.
    • Process spawning speed (for CGI), again controlled by the kernel.

    Would someone care to correct me?

    Note that TFA (well, the slideshow) measures performance in requests per second. That's a very useful measure, but it's compared to Ruby (Mongrel?) and PHP (Apache?). I'm not sure what that comparison means. Does Apache not support lisp, or only as CGI?

    Is there something stopping Apache from being sped up? Is he measuring the performance of LISP, or the performance of a HTTP daemon?

    I'm a bit confused...

    • The web server can be a significant factor in performance when the server is under load, as witnessed by Lighttpd being significantly faster than Apache under some workloads and configurations. There are many things the web server can have some effect on (its own memory use, management of fcgi processes, and so forth).

      You say that "requests per second" is a useful measure but without the information you yourself suggest is missing I would not agree that it is. There is a vast range of difference between reques

  • by klapaucjusz ( 1167407 ) on Monday May 25, 2009 @01:05PM (#28085735) Homepage
    C is more readable [jussieu.fr], though.
  • Yeah right. (Score:5, Insightful)

    by stonecypher ( 118140 ) <stonecypher@gmai ... minus physicist> on Monday May 25, 2009 @01:06PM (#28085749) Homepage Journal

    Of course, this guy didn't benchmark against any modern performance kings, such as Nginx, YAWS, htstub or LightStreamer.

    There is no reason to believe this is the world's fastest webserver, and I'm sure as hell not holding my breath.

  • not that I want to pee all over anybody's accomplishments BUT when I see a pure implementation of BSD sockets using pure LISP and IT beats the C version then and only then will I ACK that SYN statement.

  • Isn't LISP an ideal companion to Python? Thnakes in cartoons talk with lithps after all, and Snagglepuss even. LISP maybe eethy to read but it's not easy to thay, that'th why they call it a lithp.
  • by Onyma ( 1018104 ) on Monday May 25, 2009 @01:11PM (#28085825)
    I first learned LISP using the watered down version included in AutoCAD while writing huge customization projects in the 80's. I loved the language so much I dove into it full force and enjoyed it thoroughly. To me it was so inherently elegant I wanted to use it everywhere. Obviously however making a living meant most of us had to focus our energies elsewhere but something like this makes me all giddy again. I think I have some playing to do!
  • ... so I guess it's not fast enough.
