World's "Fastest" Small Web Server Released, Based On LISP

Cougem writes "John Fremlin has released what he believes to be the world's fastest webserver for small dynamic content, teepeedee2. It is written entirely in LISP, the world's second-oldest high-level programming language. He gave a talk at the Tokyo Linux Users Group last year, with benchmarks, which he says demonstrate that 'functional programming languages can beat C.' Imagine a small alternative to Ruby on Rails, supporting the development of any web application, but much faster."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday May 25, 2009 @01:45PM (#28085519)

    We would appreciate you folks taking some time to reflect on our servicemen who gave their lives saving your asses in WW I and II.

    I'm sure the rest of the world appreciates your sitting on your neutral asses for the first couple of years of each of those wars...

  • by ZosX ( 517789 ) <zosxavius@gmQUOTEail.com minus punct> on Monday May 25, 2009 @01:50PM (#28085569) Homepage

    "Automated garbage collection is rubbish" and "Real men don't use floating point" were two of the most interesting and compelling arguments I've read all week. And using LISP as a web platform framework? Fascinating. There were some great ideas back in the days when computers were in there infancy and a lot of them have been abandoned for the most part. Like trinary computing for instance. The building blocks of the computers that you see today were partially designed because of technological limitations at the time. The mechanical underpinnings of the first computers are still present today. I don't care if I'm really wrong about this point (I've been wrong before), but I think computing really needs to transcend a system based on 0s and 1s. Why not abandon the general purpose cpu altogether or at least reduce it to a single core and focus on multiple cores of different types of chips that are optimized for different types of problems? Larrabee might be an hint of that, though I think that ultimately it will really just be a cost saving measure since the gpu no longer needs to be integrated into the board. I think we may be locked into a future of x86 clones for another 30 years at this rate. The Itanic was a good lesson for intel in how much the market values their older code still being able to run without issue. Forgive my bit of a ramble. Just had a whole bunch of random thoughts there.

  • by MADnificent ( 982991 ) on Monday May 25, 2009 @01:51PM (#28085593)

    As you hinted at Common Lisp being an interpreted language, a clarification is in order.

    Most Common Lisp implementations are compiled, and have been for some time now. Some Lisp implementations compile the code themselves (SBCL, for instance); others go by way of C, generating and compiling C code (ECL, for instance).

    An overview of free CL implementations can be found at http://www.cliki.net/Common%20Lisp%20implementation [cliki.net] .

  • by drsmithy ( 35869 ) <drsmithy@nOSPAm.gmail.com> on Monday May 25, 2009 @01:56PM (#28085645)

    Even though Internet throughput seems to be increasing (bandwidth) in leaps and bounds, the server is often a bottle-neck.

    What ? You can buy a quad-core, multi-gigabytes-of-RAM machine for under US$500.

    For web serving, if your webserver hardware is the bottleneck, You're Doing It Wrong.

  • by epiphani ( 254981 ) <epiphani@@@dal...net> on Monday May 25, 2009 @02:06PM (#28085743)

    Modded Troll? Come on guys, it's a legitimate challenge - I'd really love to have someone take me up on it. If anyone out there thinks they can honestly write faster code in some higher-level language than I can in C, I want to put it to the test. It'll be fun, and I'll happily admit defeat if it can be thus proven. (And I'll take up the competing programming language.)

  • Yeah right. (Score:5, Insightful)

    by stonecypher ( 118140 ) <stonecypher@noSpam.gmail.com> on Monday May 25, 2009 @02:06PM (#28085749) Homepage Journal

    Of course, this guy didn't benchmark against any modern performance kings, such as Nginx, YAWS, htstub or LightStreamer.

    There is no reason to believe this is the world's fastest webserver, and I'm sure as hell not holding my breath.

  • by klapaucjusz ( 1167407 ) on Monday May 25, 2009 @02:09PM (#28085801) Homepage

    [Lisp] can be [interpreted]

    Actually, Common Lisp cannot be implemented as a pure interpreter -- there are a few features of the language that you cannot implement without performing a pass over each function.

    (Scheme, the other dominant dialect of Lisp, can be implemented as a pure interpreter, a pure compiler, or a hybrid design.)

  • by Holmwood ( 899130 ) on Monday May 25, 2009 @02:11PM (#28085819)

    First, his blog is standing up to a slashdotting. That's impressive.

    Dunno about you, buddy, but I find LISP a lot easier to read and write

    Right, but we're not talking about you. I wish we were. If your skills were more common we'd have a better world.

    Second, I can speak up and I'm not even posting as an ac. It's straightforward to find people who can "program" in a language of their choice. It's tougher to find people who can program well in a language of their choice. It's tougher yet to find people who can program well in a language of your choice. It's very tough to find someone who can code well in C and insanely tough to find someone who can code well in LISP.

    It's been my observation -- as someone who has managed to convince many others that he deserves the salary of an "uberprogrammer" -- as I've shifted into running large engineering teams, that perhaps one in twenty programmers can code acceptably in C and perhaps one in two hundred in LISP.

    Third, I'd note there are behaviours of his software that surprised and annoyed some readers -- e.g. column treatment. I'd argue that these are generally buried deeper in LISP code than in C, but that's something we could heartily debate.

    Finally, his code seems typical of what I've seen from good LISP programmers -- including even at times myself. Poor documentation. The code is simple, elegant, and should "speak for itself". Well it doesn't. Not to someone trying to maintain it.

    C programmers -- perhaps because of the nature of the language -- seem less prone to this particular trap, though still bad.

    Regards,
    -Holmwood

  • by Anonymous Coward on Monday May 25, 2009 @02:11PM (#28085823)

    I think you are right that C can take on any higher-level programming language in speed. The same could be said of assembly.

    The reason, of course, that we don't write everything in low-level programming languages is just the cost of maintaining them (LOC, readability, compatibility...). I like what John has done in showing what assumptions should be made in a higher-level programming language in order not to compromise a whole lot of speed.

  • by mauddib~ ( 126018 ) on Monday May 25, 2009 @02:12PM (#28085833) Homepage

    Yes, and your caveat is actually the most important element: for projects that need well-definable high-level abstractions, or that need to operate on mathematically infinite structures, a functional language clearly wins in comparison with C.

    The real question is how to allow high-level lambda abstractions while keeping space and time complexity as low as an optimized C program's.

    Well, just to show you that your challenge is easily met... In Lisp, it is easy to write an assembler, which over time allows the same kind of imperative abstractions as are present in C, thus allowing me to write a program with equal speed as in C.

    Also, when the nature of the input of a high-level program changes, it could optimize its data structures and algorithms to create a better match with that input. Of course, such a thing could also be implemented in C or Pascal, but it requires tremendously more effort.

  • by osu-neko ( 2604 ) on Monday May 25, 2009 @02:22PM (#28085969)

    Actually it was the Soviet Union that broke the back of the German army.

    There is a point where oversimplification turns an essentially true idea into an utterly false statement. You passed that point a while back here.

    And if you liberated us in 1945, then why are you still occupying the place with military bases.

    Because you want us to. If you wanted us to leave, we would. There are numerous examples of this.

    The price we are paying being the continued McDonaldization [wikipedia.org] of the place ..

    Given that this is happening everywhere in the world along with modernization, even places where no American soldier has ever set foot, your statement linking the two is utterly absurd.

  • You can't be faster than C, because only C has access to 100% of the special functionality OS kernels provide, like sendfile(). This challenge is moot and everybody knows it; the point is to _not_ be writing in C and still achieve speeds which are respectable and/or fast enough.
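
    For reference, a minimal sketch of the Linux-specific sendfile() path alluded to above: the kernel copies the file straight to the socket without bouncing the data through user space. The helper name is made up for illustration and most error handling is trimmed.

    /* Serve a static file over an already-accepted socket using sendfile(2). */
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int serve_file(int client_fd, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t sent = sendfile(client_fd, fd, &offset, st.st_size - offset);
            if (sent <= 0)
                break;                  /* error or connection closed */
        }

        close(fd);
        return (offset == st.st_size) ? 0 : -1;
    }
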
  • by daveime ( 1253762 ) on Monday May 25, 2009 @02:50PM (#28086287)

    Or perhaps we are part of the other 95% of the world that aren't American ?

  • by Anonymous Coward on Monday May 25, 2009 @02:51PM (#28086305)

    Well, I'll celebrate memorial day when the Americans start celebrating their liberation from the British by the French on October 19th.

  • by beelsebob ( 529313 ) on Monday May 25, 2009 @02:53PM (#28086329)

    The Debian language shootout has a few examples of functional languages being faster than people's best efforts in C, especially when it comes to parallelisation. I suggest you try to write a regex-dna example that's faster than the Haskell implementation, for example.

    Having said that, the point really isn't that it's faster, it's that it's *as fast* - people should be shedding the ideas that functional languages are slow, and unusable, and trying them out in industry, because now the only real risk is that you have dumb coders (entirely possible).

  • He's also right (Score:5, Insightful)

    by Sycraft-fu ( 314770 ) on Monday May 25, 2009 @03:12PM (#28086525)

    The reason is that C is the closest high-level language to how a processor actually operates. A lot of people get confused, or perhaps never really learn, how a CPU actually works, and that no matter what language you code in, it all gets translated into machine language in the end.

    Now what that means is that there are certain things that, while convenient for humans, have nothing to do with how the processor actually thinks. A good example would be newer replacements for pointers. A lot of people hate pointers, they claim, correctly, that pointers are confusing, and that you can easily cause problems with them. That is all true, however it is also how the CPU actually thinks. The CPU doesn't have advanced concepts of references and of garbage collection and so on. The CPU has data in memory, and pointers to the location of that data. It doesn't even have data types. The data is just binary data. You can store a string, and then run calculations on it using the FPU. Granted you'll get garbage as a result, but there's nothing stopping you from doing it, the CPU has no idea what your data is.

    So, the upshot of this is that C is very close to the bare metal of how the system works. Thus if you are good with it, you can produce extremely efficient code. The higher-level stuff may be nice, but it all slows things down. When you have a managed language that takes care of all the nasty stuff for you, well, it is spending CPU cycles doing that. Now the tradeoff is quite often worth it, since CPUs have loads of power and maintainability of code is important, but don't trick yourself into thinking it is more efficient.

    You have to remember that no matter what, there is one and only one way that the machine actually thinks, one way it actually processes information. All the nifty high level programming shit is just to make life easier for the programmers. That's wonderful, but it doesn't give you the most optimized code. Of the high level languages, C retains that crown, and likely always will, because it is the closest to how a CPU actually works. I've seen the joke that "C is a language with all the speed of assembly and all the ease of use of assembly!" There's some truth to that.

    So I have to agree with the grandparent. If the LISP heads think LISP is faster than C, they are kidding themselves. I'm not saying a good LISP program can't be faster than a bad C program, but if you have equal skill in optimization, sorry C will win out because in the end it will generate more efficient machine code and that's all that matters. All the theory of different programming paradigms in the world isn't relevant to how the CPU is actually going to do things.
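
    To make the "CPU has no data types" point above concrete, a minimal C sketch (the byte values are arbitrary and the resulting float is meaningless; memcpy is used instead of pointer casts to stay within the language rules):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        char text[4] = { 'L', 'I', 'S', 'P' };    /* just four bytes in memory */
        uint32_t as_int;
        float as_float;

        memcpy(&as_int, text, sizeof as_int);     /* view the bytes as an integer */
        memcpy(&as_float, text, sizeof as_float); /* view the same bytes as a float */

        /* The hardware computes with either view; only the program's
           interpretation gives the bits any meaning. */
        printf("as int:   %u\n", as_int);
        printf("as float: %g\n", as_float * 2.0f);
        return 0;
    }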

  • by TheRaven64 ( 641858 ) on Monday May 25, 2009 @03:14PM (#28086549) Journal

    C (and even assembly) can't realize that the same inputs to a routine always cause the same output, and therefore cache the return value and just return it (not without a lot of buggy, custom code anyway).

    C can't because C is a specification. Now let's talk about implementations. If we're talking about two of the popular open source ones, GCC and LLVM, you have __attribute__((const)) and __attribute__((pure)), which mark a function as not reading and not modifying global memory, respectively. A function marked with the const attribute will always return the same value given the same inputs.

    Of course, these are hacks, and are only required when the compiler can't see the function body and infer these properties itself. Both LLVM and GCC do inter-function constant propagation and LLVM does specialisation too so, if the function body is available while it is compiling the call site, it will generate a specialised version of that function for the specific set of arguments and can inline it. If it evaluates to a constant, or a reused expression, then something like the redundant subexpression elimination pass will remove the duplication.

    Please can we stop with this nonsense that there are 'compiled languages' and 'interpreted languages'. You can compile or interpret any language. A compiler is just a partial evaluation of an interpreter on its input. If you decide to implement C by compiling each compilation unit to LLVM IR and then doing (and caching the results of) link-time optimisation when you run the program, you get most of the advantages of a JVM. Similarly, if you statically compile Java you get a different performance profile to JIT'd Java. Which is faster depends on your code and your compiler.

    And yes, before you ask, I have worked on a C compiler and written a compiler for a dynamic language (Smalltalk).
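
    For the curious, a minimal sketch of the two attributes mentioned above, as accepted by GCC and LLVM/Clang (the function names are made up for illustration):

    /* const: the result depends only on the arguments, so the compiler may
       reuse a previous result for a repeated call with the same argument. */
    __attribute__((const))
    int square(int x)
    {
        return x * x;
    }

    /* pure: the function may read, but not write, global memory. */
    extern int table[256];

    __attribute__((pure))
    int lookup(int i)
    {
        return table[i & 0xff];
    }

    int sum_twice(int x)
    {
        /* A compiler honouring the attribute can evaluate square(x) once. */
        return square(x) + square(x);
    }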

  • Re:He's also right (Score:2, Insightful)

    by TheDarkMaster ( 1292526 ) on Monday May 25, 2009 @03:22PM (#28086653)
    Where are my mod points when I need them... This guy needs a +1 Insightful.
  • by Futurepower(R) ( 558542 ) on Monday May 25, 2009 @03:40PM (#28086835) Homepage
    "... C is the closest high level language to how a processor actually operates."

    C was meant to be equivalent to a high-level assembly language. Someone told me that he was worried about coding in assembly language for a particular part of a video driver, but he discovered that, in that case, his C compiler wrote perfect assembly language.
  • by the agent man ( 784483 ) on Monday May 25, 2009 @04:04PM (#28087067)
    Is there a point to this challenge? I will prove that anything written in C will not be as fast as my implementation of it in assembler.
  • by ClosedSource ( 238333 ) on Monday May 25, 2009 @04:19PM (#28087205)

    "What C programmers and programmers fail to realize it that there is something called time to Market, something called budget."

    Perhaps, but this thread doesn't support the idea that C programmers "fail to realize" other issues. This is a discussion about speed, not time to market, cost, or maintainability.

  • Re:He's also right (Score:5, Insightful)

    by Sectrish ( 949413 ) on Monday May 25, 2009 @04:37PM (#28087365) Homepage
    I fully agree with your post (I prefer C over most other languages myself for some weird reason, but if it needs to be made *right now*, I'll use Python/Perl/Bash/...).

    However, there is an addendum I'd like to make here:

    Some programming languages force you to think in ways that are actually beneficial to the speed of your code, and can outpace the "normal" C program significantly.

    For example, a few months ago I was forced to write something concerning an AI algorithm in Prolog. Now, I was cursing and cursing at it, because the constraint solver built into the Prolog compiler was so goddamn restrictive, but that's how constraint solving works. Every time I was thinking to myself: "if I'd been allowed to build this in C, it would have been done yesterday, and probably be faster to boot!"

    But when I ended up finishing it and running it, it was blindingly fast, and I queried my professor about it. He told me that another student some time ago was thinking the same thing as me, and actually made a C variant. It ran 4x slower than the Prolog equivalent, even after he spent quite some time optimizing (interchanging algorithms and even looking at the assembler code, he told me).

    Then he told me what was causing this discrepancy, as I had always thought that C was the end-all, be-all of performance. It was the restrictive nature of the Prolog solver that caused me to put more brain power into the problem, and as such shift work from the computer to the human. Those same restrictions also allowed lots and lots of optimisations (aliasing comes to mind).
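
    On the aliasing point: in C the compiler must normally assume that pointers may overlap, which blocks exactly the kind of reordering a more restrictive language gets for free. C99's restrict qualifier is the manual way to hand that guarantee back; a minimal sketch:

    /* Without restrict, dst and src might overlap, so the compiler must be
       conservative about keeping values in registers or vectorising. */
    void scale_may_alias(float *dst, const float *src, float k, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    /* With restrict, the programmer promises the arrays do not overlap,
       and the optimiser is free to reorder, unroll and vectorise. */
    void scale_no_alias(float *restrict dst, const float *restrict src,
                        float k, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }
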
  • Re:That's a myth. (Score:5, Insightful)

    by TeknoHog ( 164938 ) on Monday May 25, 2009 @04:50PM (#28087487) Homepage Journal

    Once you get things like branch prediction, speculative execution and pipelining into the picture, no, C isn't really any closer to how the processors operate. Making efficient use of a modern CPU involves detail at a much, much lower level than C exposes.

    The problem at that level is that you'll be seriously bound to a specific architecture. Even C, which is often called a portable assembler, is designed after a certain kind of assembler.

    The somewhat surprising result is that you can also improve performance (compared to plain C) with a higher level language. You need a higher-level perspective to tackle things like vector/parallel processing efficiently.
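
    As a concrete illustration of that lower level, here is the same loop in portable C and with x86 SSE intrinsics, which expose the 4-wide vector hardware directly at the cost of tying the code to one architecture (a sketch; assumes n is a multiple of 4 and SSE is available):

    #include <xmmintrin.h>    /* SSE intrinsics, x86-specific */

    /* Portable C: the compiler may or may not vectorise this. */
    void add_plain(float *out, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* Explicit SSE: four additions per instruction, but x86-only. */
    void add_sse(float *out, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
        }
    }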

  • by Kensai7 ( 1005287 ) on Monday May 25, 2009 @04:59PM (#28087583)

    Every language has its own niche and ideal use.

    I like your simple example that shows the merits of the oldest high-level language in what it was designed to do best.

  • by ratboy666 ( 104074 ) <fred_weigel@[ ]mail.com ['hot' in gap]> on Monday May 25, 2009 @06:19PM (#28088301) Journal

    And now for the grand unveil - the SCHEME (LISP) version, with integrated C code for the "super-speedy" inner loop. Just to illustrate that the SAME optimizations are available for the LISP programmer. This can be further micro-optimized.

    Note that it isn't much different from a "C" implementation that has the same optimization. Note the ease of moving between data types (which we only worry about in a very cursory way). I am waiting for your pure C implementation.

    ;; In "normal" Scheme syntax, with escapes into Gambit's "six" (C-like)
    ;; syntax for the C developer.

    ;; Define an in-line C function for a fast factorial within the 32-bit constraint.
    (c-declare #<<c-end

    int fast_factorial(int n) {
        if (n <= 0)
            return 1;
        return n * fast_factorial(n - 1);
    }

    c-end
    )

    (define fast-factorial
        (c-lambda (int) int "fast_factorial"))

    ;; The constant 12 is the largest factorial that can be computed in a 32-bit int.
    ;; Notice that fast-factorial is also escaped: it is used as a C-style call
    ;; from within the six-syntax code below.
    \int factorial(int n) {
        if (n <= 12) {
            \fast-factorial(n);
        } else {
            n * factorial(n - 1);
        }
    }

  • by shutdown -p now ( 807394 ) on Monday May 25, 2009 @06:36PM (#28088457) Journal

    The popularity of Python is essentially about having a LISP that has a more familiar syntax and interfaces well with C programs. Python isn't LISP but it's not very far off.

    It is very far off. I'm not sure what criteria you're using to determine what's "not very far off", but if it's first-class functions, then most modern mainstream languages (with notable exceptions of C++ and Java) aren't "far off" from Lisp. But I would say that it's a wrong definition.

    What really sets Lisp apart is how the program itself is defined in terms of structures that are fundamental to the language, and how those structures can be easily manipulated in the language itself. Simply put, Lisp - especially Common Lisp (though R6RS is neat, too) - is a pinnacle of metaprogramming so far, and that's what is its defining feature. And Python doesn't come anywhere close to that.

    It's also what makes Lisp so hard to work with. Yes, you can use macros to enable an extremely high level of code reuse, effectively 100% (if there's any kind of pattern in your code, you can write a macro to encapsulate it). But it also means that you're effectively defining your own DSL, and then writing your program in that - and when someone else needs to understand and maintain your code, they'll have to figure out that DSL first.

    This isn't really fundamentally different from plain function/class libraries (they are also DSLs), but the expressivity of macros is so much higher than plain function calls (even with Smalltalk/Ruby style blocks and other such facilities) - and, consequently, so is their complexity. Idiomatic Lisp is invariably harder to understand than idiomatic C++, much less Python or Java.

  • Re:That's a myth. (Score:5, Insightful)

    by hawk ( 1151 ) <hawk@eyry.org> on Monday May 25, 2009 @07:38PM (#28088953) Journal

    It's about using the right tool for the job.

    Some years, I do heavy computational programming. Not so much number crunching as bashing numbers into submission.

    I can do this *much* faster in a modern Fortran (90 or later) than in C. Not because C can't do the same things, and not because an optimized C can't get to the same results.

    The difference is that I can sit down and simply enter matrix math algorithms into Fortran nearly as written, and the optimizer will go to town and give me near-perfect code, which will be faster than what I would get with C (which would also take significantly longer to code). A skilled C coder could do a better job, and ultimately get roughly the same performance as I got from Fortran.

    This isn't because "Fortran is better than C," but because Fortran is designed and optimized by people who do this kind of stuff *to* do this kind of stuff. It can make very strong assumptions that C can't (and much of the hand-optimizing of C is essentially replicating these assumptions).

    OTOH, writing a modern operating system in Fortran *could* be done, but it would be painful, would take far longer to code, and would probably have atrocious performance.

    note: I believe that Pr1mos was largely written in FORTRAN IV.

    hawk

  • Horse to water... (Score:3, Insightful)

    by ratboy666 ( 104074 ) <fred_weigel@[ ]mail.com ['hot' in gap]> on Monday May 25, 2009 @08:17PM (#28089321) Journal

    We now come to the final post in this series.

    The speed in C comes from the direct hardware "low-level" thing. In the domain of "int" types on a 32 bit machine, I would say that the most efficient (well, one of the most efficient) implementations of factorial() would be:

    #define factorial(n) ((n) < 0 ? 1 : fact_table[n])
    int fact_table[] = { 1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, 39916800, 479001600 };

    Can you hear the l33t-speak? I codez the C r3al g00d! My factorial function only takes a nanosecond or so! And, it ONLY takes two instructions! No, let's do better. We can DOUBLE the speed of this puppy. Just assume that negative numbers will never be passed in.

    #define factorial(n) fact_table[n]

    That's one fast function now!

    This is 32 bit int type only. Not a very interesting or useful case. 64 bits? Brings us to 20! or so. It is certainly a major benefit to Scheme to have the bignum built-in. Have you finished the C version without gmp yet?
     

  • by Jurily ( 900488 ) <jurily&gmail,com> on Tuesday May 26, 2009 @03:16AM (#28092029)

    C was meant to be equivalent to a high-level assembly language.

    No, it was a general-purpose high-level language. It's just that our definition of "high-level" has changed over the past 40 years. Now we have Python and Ruby.

  • Re:He's also right (Score:3, Insightful)

    by amn108 ( 1231606 ) on Tuesday May 26, 2009 @06:35AM (#28092885)

    Oh, it is NOT because they TRY to mimic C, I assure you.

    C is closest to the metal, after assembly language. General-purpose computing on proprietary vector processors is something new. Naturally, just as with the earliest computers, people are careful, and the first compiler to pop up is a very straightforward C-like one, which is a natural first (and simplest) choice. C was designed to be close to the metal, but this also made for very simple compilers. C is very efficient in that sense - for RASP (Random Access Stored Program) von Neumann architectures, which 99% of today's computers are - it is a really good compromise for a language that is quite readable and yet does not require a state-of-the-art compiler to translate REALLY efficiently into something the machine can understand. So, I think, they did not mimic C to run shaders for its beauty and programming potential; it is just that C is the next step above bare assembly for programming these shaders.

    Just as with all software, however, in my opinion C becomes more difficult to maintain for very large programs, because the programmer tends to lose sight of the problem among all those close-to-the-metal C primitives, and/or misses out on optimization, assuming things without profiling or inevitably negating an optimization for one platform while trying to adapt for another with cross-platform C.

    Anyway, I may be wrong here, but from my experience I am starting to lean heavily towards trusting automated translators (compilers) more than I trust MYSELF when optimizing C code, and I know a lot about optimizing C code. Of course we are very far from having compilers produce the kind of inverse square root functions John Carmack has written, but then again, an inverse square root function is a mathematical function, not something a compiler needs to invent itself.
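
    For reference, the inverse square root being alluded to is the well-known Quake III routine (popularly attributed to Carmack, though its actual authorship is disputed). A lightly cleaned-up sketch, using memcpy rather than pointer casts:

    #include <string.h>
    #include <stdint.h>

    /* Fast approximate 1/sqrt(x), as popularised by Quake III Arena. */
    static float fast_rsqrt(float x)
    {
        float half = 0.5f * x;
        uint32_t bits;

        memcpy(&bits, &x, sizeof bits);    /* view the float as raw bits */
        bits = 0x5f3759df - (bits >> 1);   /* the famous magic constant */
        memcpy(&x, &bits, sizeof x);

        x = x * (1.5f - half * x * x);     /* one Newton-Raphson step */
        return x;
    }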

  • by shutdown -p now ( 807394 ) on Tuesday May 26, 2009 @01:33PM (#28097601) Journal

    I find your whole comment rather odd. You're essentially saying "it's not fair, the Haskell compiler optimises code better than the C compiler does". The Haskell compiler has recognised that the task it's being asked to do does not require everything to be evaluated, and hence is optimising that code away.

    The bottom line is if you're asked "produce this output", and you do so then it's entirely valid. The fact that Haskell is amenable to massive optimisation like that is something the shootout should be demonstrating.

    *sigh*

    I'm not saying the test is "unfair". I'm merely saying that it is incorrect.

    If the point of the test is to determine the actual performance of, e.g., a sorting algorithm, then the test should at least make sure that the output of sorting is used (e.g. displayed)! It doesn't matter how the compiler does the sorting - let it optimize it the best way it can - but it should be forced to produce the complete sorted sequence either way. Otherwise you aren't really measuring sorting... you're measuring how well the compiler can detect and remove dead code, and the test is meaningless for its original goal. Because when you write a real-world application in Haskell that does use the output of the algorithm, you'll hit the actual performance figures, and not the imaginary ones from the Shootout.

    Let me put it this way. Let's say I design a language with all the usual operators (imperative, for simplicity's sake), Turing-complete, but with no output facilities - the only thing it can output is the time it took to run the program. I then write a compiler for it, which - since there's no way to output information - just discards all computations and prints "0 milliseconds" to the output. I then claim that this language clearly beats any other language in the Shootout on any test. Which is completely factually correct, if the test output consists only of the time it took to run it - but how useful would such a language be to you in practice?

    And to fire back at the "unfairness" of the shootout, Haskell has been banned from putting a pragma in some of its programs that specifies how the garbage collector should behave. Why? Because when that's done, the code runs 4 times faster, and beats C.

    I wonder what the pragma is - cannot find anything in Google. Is it something that basically results in it not freeing memory at all for the duration of the test, and letting the system do it when the process shuts down?
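
    The same pitfall is easy to reproduce in C: if a benchmark never uses its result, an optimising compiler is entitled to delete the work being "measured". A minimal sketch of the usual fix - force the result to be observed:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t start = clock();

        double sum = 0.0;
        for (long i = 1; i <= 100000000L; i++)
            sum += 1.0 / (double)i;

        clock_t end = clock();

        /* Without this use of sum, the whole loop is dead code and may be
           removed entirely, making the timing meaningless. */
        printf("sum = %f\n", sum);
        printf("elapsed: %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }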

  • Aw shucks, you caught me out! You AC guys...

    Sure, since Gambit-C uses C as its "assembler" level, it is possible to micro-optimize. In Post #2, I illustrate how to deal with that. In Post #3 you see how I deal with the "low-level advantage" issue.

    And, in response to you: I challenge that YOU cannot beat Gambit-C, even given six months implementing this stupid little useless program. Without resorting to pre-written code, that is, in which case *I* get to use the same libraries.

    To WIN, you have to show a 5% advantage in run-time. Otherwise, Scheme takes it. Simply because it IS a true HLL, and can represent stuff at a reasonable level, and is completely "safe".

    Go for it.
